00:00:00.001 Started by upstream project "autotest-per-patch" build number 132311 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.110 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.111 The recommended git tool is: git 00:00:00.112 using credential 00000000-0000-0000-0000-000000000002 00:00:00.113 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.169 Fetching changes from the remote Git repository 00:00:00.171 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.224 Using shallow fetch with depth 1 00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.224 > git --version # timeout=10 00:00:00.269 > git --version # 'git version 2.39.2' 00:00:00.269 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.297 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.297 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.933 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.944 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.956 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.957 > git config core.sparsecheckout # timeout=10 00:00:06.967 > git read-tree -mu HEAD # timeout=10 00:00:06.983 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.001 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.001 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.105 [Pipeline] Start of Pipeline 00:00:07.117 [Pipeline] library 00:00:07.118 Loading library shm_lib@master 00:00:07.119 Library shm_lib@master is cached. Copying from home. 00:00:07.132 [Pipeline] node 00:00:07.140 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.142 [Pipeline] { 00:00:07.150 [Pipeline] catchError 00:00:07.152 [Pipeline] { 00:00:07.162 [Pipeline] wrap 00:00:07.170 [Pipeline] { 00:00:07.176 [Pipeline] stage 00:00:07.177 [Pipeline] { (Prologue) 00:00:07.383 [Pipeline] sh 00:00:07.678 + logger -p user.info -t JENKINS-CI 00:00:07.696 [Pipeline] echo 00:00:07.697 Node: WFP8 00:00:07.706 [Pipeline] sh 00:00:08.016 [Pipeline] setCustomBuildProperty 00:00:08.031 [Pipeline] echo 00:00:08.033 Cleanup processes 00:00:08.037 [Pipeline] sh 00:00:08.325 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.325 836020 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.337 [Pipeline] sh 00:00:08.622 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.622 ++ grep -v 'sudo pgrep' 00:00:08.622 ++ awk '{print $1}' 00:00:08.622 + sudo kill -9 00:00:08.622 + true 00:00:08.636 [Pipeline] cleanWs 00:00:08.646 [WS-CLEANUP] Deleting project workspace... 00:00:08.646 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.652 [WS-CLEANUP] done 00:00:08.657 [Pipeline] setCustomBuildProperty 00:00:08.678 [Pipeline] sh 00:00:08.963 + sudo git config --global --replace-all safe.directory '*' 00:00:09.060 [Pipeline] httpRequest 00:00:09.441 [Pipeline] echo 00:00:09.443 Sorcerer 10.211.164.20 is alive 00:00:09.453 [Pipeline] retry 00:00:09.455 [Pipeline] { 00:00:09.469 [Pipeline] httpRequest 00:00:09.474 HttpMethod: GET 00:00:09.474 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.475 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.484 Response Code: HTTP/1.1 200 OK 00:00:09.484 Success: Status code 200 is in the accepted range: 200,404 00:00:09.485 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:19.440 [Pipeline] } 00:00:19.459 [Pipeline] // retry 00:00:19.466 [Pipeline] sh 00:00:19.752 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:19.768 [Pipeline] httpRequest 00:00:20.641 [Pipeline] echo 00:00:20.643 Sorcerer 10.211.164.20 is alive 00:00:20.653 [Pipeline] retry 00:00:20.656 [Pipeline] { 00:00:20.671 [Pipeline] httpRequest 00:00:20.676 HttpMethod: GET 00:00:20.676 URL: http://10.211.164.20/packages/spdk_a7ec5bc8ef5534f546130eef18f2c0b180f6a4da.tar.gz 00:00:20.677 Sending request to url: http://10.211.164.20/packages/spdk_a7ec5bc8ef5534f546130eef18f2c0b180f6a4da.tar.gz 00:00:20.695 Response Code: HTTP/1.1 200 OK 00:00:20.695 Success: Status code 200 is in the accepted range: 200,404 00:00:20.696 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a7ec5bc8ef5534f546130eef18f2c0b180f6a4da.tar.gz 00:01:38.481 [Pipeline] } 00:01:38.497 [Pipeline] // retry 00:01:38.504 [Pipeline] sh 00:01:38.789 + tar --no-same-owner -xf spdk_a7ec5bc8ef5534f546130eef18f2c0b180f6a4da.tar.gz 00:01:41.337 [Pipeline] sh 00:01:41.622 + git -C spdk log --oneline -n5 00:01:41.622 a7ec5bc8e nvmf: added support for add/delete host wrt referral 00:01:41.622 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:01:41.622 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:01:41.622 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:01:41.622 fb6c49f2f bdev: add spdk_bdev_get_nvme_nsid() 00:01:41.632 [Pipeline] } 00:01:41.646 [Pipeline] // stage 00:01:41.656 [Pipeline] stage 00:01:41.658 [Pipeline] { (Prepare) 00:01:41.677 [Pipeline] writeFile 00:01:41.697 [Pipeline] sh 00:01:41.980 + logger -p user.info -t JENKINS-CI 00:01:41.991 [Pipeline] sh 00:01:42.274 + logger -p user.info -t JENKINS-CI 00:01:42.288 [Pipeline] sh 00:01:42.573 + cat autorun-spdk.conf 00:01:42.573 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.573 SPDK_TEST_NVMF=1 00:01:42.573 SPDK_TEST_NVME_CLI=1 00:01:42.573 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.573 SPDK_TEST_NVMF_NICS=e810 00:01:42.573 SPDK_TEST_VFIOUSER=1 00:01:42.573 SPDK_RUN_UBSAN=1 00:01:42.573 NET_TYPE=phy 00:01:42.580 RUN_NIGHTLY=0 00:01:42.586 [Pipeline] readFile 00:01:42.611 [Pipeline] withEnv 00:01:42.613 [Pipeline] { 00:01:42.625 [Pipeline] sh 00:01:42.908 + set -ex 00:01:42.908 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:42.908 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:42.908 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:42.908 ++ SPDK_TEST_NVMF=1 00:01:42.908 ++ SPDK_TEST_NVME_CLI=1 00:01:42.908 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:42.908 ++ SPDK_TEST_NVMF_NICS=e810 00:01:42.908 
++ SPDK_TEST_VFIOUSER=1 00:01:42.908 ++ SPDK_RUN_UBSAN=1 00:01:42.908 ++ NET_TYPE=phy 00:01:42.908 ++ RUN_NIGHTLY=0 00:01:42.908 + case $SPDK_TEST_NVMF_NICS in 00:01:42.908 + DRIVERS=ice 00:01:42.908 + [[ tcp == \r\d\m\a ]] 00:01:42.908 + [[ -n ice ]] 00:01:42.908 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:42.908 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:46.254 rmmod: ERROR: Module irdma is not currently loaded 00:01:46.254 rmmod: ERROR: Module i40iw is not currently loaded 00:01:46.254 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:46.254 + true 00:01:46.254 + for D in $DRIVERS 00:01:46.254 + sudo modprobe ice 00:01:46.254 + exit 0 00:01:46.263 [Pipeline] } 00:01:46.276 [Pipeline] // withEnv 00:01:46.281 [Pipeline] } 00:01:46.294 [Pipeline] // stage 00:01:46.302 [Pipeline] catchError 00:01:46.304 [Pipeline] { 00:01:46.316 [Pipeline] timeout 00:01:46.316 Timeout set to expire in 1 hr 0 min 00:01:46.317 [Pipeline] { 00:01:46.331 [Pipeline] stage 00:01:46.332 [Pipeline] { (Tests) 00:01:46.345 [Pipeline] sh 00:01:46.631 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.631 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.631 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.631 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:46.631 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.631 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:46.631 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:46.631 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:46.631 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:46.631 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:46.631 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:46.631 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:46.631 + source /etc/os-release 00:01:46.631 ++ NAME='Fedora Linux' 00:01:46.631 ++ VERSION='39 (Cloud Edition)' 00:01:46.631 ++ ID=fedora 00:01:46.631 ++ VERSION_ID=39 00:01:46.631 ++ VERSION_CODENAME= 00:01:46.631 ++ PLATFORM_ID=platform:f39 00:01:46.631 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:46.631 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:46.631 ++ LOGO=fedora-logo-icon 00:01:46.631 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:46.631 ++ HOME_URL=https://fedoraproject.org/ 00:01:46.631 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:46.631 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:46.631 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:46.631 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:46.631 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:46.631 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:46.631 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:46.631 ++ SUPPORT_END=2024-11-12 00:01:46.631 ++ VARIANT='Cloud Edition' 00:01:46.631 ++ VARIANT_ID=cloud 00:01:46.631 + uname -a 00:01:46.631 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:46.631 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:49.168 Hugepages 00:01:49.168 node hugesize free / total 00:01:49.168 node0 1048576kB 0 / 0 00:01:49.168 node0 2048kB 1024 / 1024 00:01:49.168 node1 1048576kB 0 / 0 00:01:49.168 node1 2048kB 1024 / 1024 00:01:49.168 00:01:49.168 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:49.168 I/OAT 
0000:00:04.0 8086 2021 0 ioatdma - - 00:01:49.168 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:49.168 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:49.168 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:49.168 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:49.168 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:49.168 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:49.168 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:49.168 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:49.168 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:49.168 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:49.168 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:49.168 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:49.168 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:49.168 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:49.168 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:49.168 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:49.168 + rm -f /tmp/spdk-ld-path 00:01:49.168 + source autorun-spdk.conf 00:01:49.168 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.168 ++ SPDK_TEST_NVMF=1 00:01:49.168 ++ SPDK_TEST_NVME_CLI=1 00:01:49.168 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.168 ++ SPDK_TEST_NVMF_NICS=e810 00:01:49.168 ++ SPDK_TEST_VFIOUSER=1 00:01:49.168 ++ SPDK_RUN_UBSAN=1 00:01:49.168 ++ NET_TYPE=phy 00:01:49.168 ++ RUN_NIGHTLY=0 00:01:49.168 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:49.168 + [[ -n '' ]] 00:01:49.168 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:49.168 + for M in /var/spdk/build-*-manifest.txt 00:01:49.168 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:49.168 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.168 + for M in /var/spdk/build-*-manifest.txt 00:01:49.168 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:49.168 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.168 + for M in /var/spdk/build-*-manifest.txt 00:01:49.168 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:49.168 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.168 ++ uname 00:01:49.168 + [[ Linux == \L\i\n\u\x ]] 00:01:49.168 + sudo dmesg -T 00:01:49.428 + sudo dmesg --clear 00:01:49.428 + dmesg_pid=837465 00:01:49.428 + [[ Fedora Linux == FreeBSD ]] 00:01:49.428 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.428 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.428 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:49.428 + [[ -x /usr/src/fio-static/fio ]] 00:01:49.428 + export FIO_BIN=/usr/src/fio-static/fio 00:01:49.428 + FIO_BIN=/usr/src/fio-static/fio 00:01:49.428 + sudo dmesg -Tw 00:01:49.428 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:49.428 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:49.428 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:49.428 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.428 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.428 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:49.428 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.428 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.428 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.428 09:03:50 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:49.428 09:03:50 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.428 09:03:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.428 09:03:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:49.428 09:03:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:49.428 09:03:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.428 09:03:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:49.428 09:03:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:49.428 09:03:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:49.428 09:03:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:49.428 09:03:50 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:49.428 09:03:50 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:49.428 09:03:50 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.428 09:03:50 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:49.428 09:03:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:49.428 09:03:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:49.428 09:03:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:49.428 09:03:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.428 09:03:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.428 09:03:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.428 09:03:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.429 09:03:50 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.429 09:03:50 -- paths/export.sh@5 -- $ export PATH 00:01:49.429 09:03:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.429 09:03:50 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:49.429 09:03:50 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:49.429 09:03:50 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732003430.XXXXXX 00:01:49.429 09:03:50 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732003430.d1TJ1y 00:01:49.429 09:03:50 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:49.429 09:03:50 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:49.429 09:03:50 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:49.429 09:03:50 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:49.429 09:03:50 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:49.429 09:03:50 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:49.429 09:03:50 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:49.429 09:03:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.429 09:03:50 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:49.429 09:03:50 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:49.429 09:03:50 -- pm/common@17 -- $ local monitor 00:01:49.429 09:03:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.429 09:03:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.429 09:03:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.429 09:03:50 -- pm/common@21 -- $ date +%s 00:01:49.429 09:03:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.429 09:03:50 -- pm/common@21 -- $ date +%s 00:01:49.429 09:03:50 -- pm/common@25 -- $ sleep 1 00:01:49.429 09:03:50 -- pm/common@21 -- $ date +%s 00:01:49.688 09:03:50 -- pm/common@21 -- $ date +%s 00:01:49.688 09:03:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732003430 00:01:49.688 09:03:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732003430 00:01:49.688 09:03:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732003430 00:01:49.688 09:03:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732003430 00:01:49.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732003430_collect-cpu-load.pm.log 00:01:49.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732003430_collect-cpu-temp.pm.log 00:01:49.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732003430_collect-vmstat.pm.log 00:01:49.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732003430_collect-bmc-pm.bmc.pm.log 00:01:50.626 09:03:51 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:50.626 09:03:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:50.626 09:03:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:50.626 09:03:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.626 09:03:51 -- spdk/autobuild.sh@16 -- $ date -u 00:01:50.626 Tue Nov 19 08:03:51 AM UTC 2024 00:01:50.626 09:03:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:50.626 v25.01-pre-159-ga7ec5bc8e 00:01:50.626 09:03:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:50.626 09:03:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:50.626 09:03:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:50.626 09:03:51 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:50.626 09:03:51 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:50.626 09:03:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.626 ************************************ 00:01:50.626 START TEST ubsan 00:01:50.626 ************************************ 00:01:50.626 09:03:51 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:50.626 using ubsan 00:01:50.626 00:01:50.626 real 0m0.000s 00:01:50.626 user 0m0.000s 00:01:50.626 sys 0m0.000s 00:01:50.626 09:03:51 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:50.626 09:03:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:50.626 ************************************ 00:01:50.626 END TEST ubsan 00:01:50.626 ************************************ 00:01:50.626 09:03:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:50.626 09:03:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:50.626 09:03:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:50.626 09:03:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:50.626 09:03:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:50.626 09:03:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:50.626 09:03:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:50.626 09:03:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:50.626 
09:03:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:50.885 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:50.885 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:51.144 Using 'verbs' RDMA provider 00:02:04.296 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:16.506 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:16.506 Creating mk/config.mk...done. 00:02:16.506 Creating mk/cc.flags.mk...done. 00:02:16.506 Type 'make' to build. 00:02:16.506 09:04:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:16.506 09:04:17 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:16.506 09:04:17 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:16.506 09:04:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.506 ************************************ 00:02:16.506 START TEST make 00:02:16.506 ************************************ 00:02:16.506 09:04:17 make -- common/autotest_common.sh@1127 -- $ make -j96 00:02:16.506 make[1]: Nothing to be done for 'all'. 00:02:17.888 The Meson build system 00:02:17.888 Version: 1.5.0 00:02:17.888 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:17.888 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:17.888 Build type: native build 00:02:17.888 Project name: libvfio-user 00:02:17.888 Project version: 0.0.1 00:02:17.888 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:17.888 C linker for the host machine: cc ld.bfd 2.40-14 00:02:17.888 Host machine cpu family: x86_64 00:02:17.888 Host machine cpu: x86_64 00:02:17.888 Run-time dependency threads found: YES 00:02:17.888 Library dl found: YES 00:02:17.888 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:17.888 Run-time dependency json-c found: YES 0.17 00:02:17.888 Run-time dependency cmocka found: YES 1.1.7 00:02:17.888 Program pytest-3 found: NO 00:02:17.888 Program flake8 found: NO 00:02:17.888 Program misspell-fixer found: NO 00:02:17.888 Program restructuredtext-lint found: NO 00:02:17.888 Program valgrind found: YES (/usr/bin/valgrind) 00:02:17.888 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.888 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.888 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.888 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:17.888 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:17.888 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:17.888 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:17.888 Build targets in project: 8 00:02:17.888 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:17.888 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:17.888 00:02:17.888 libvfio-user 0.0.1 00:02:17.888 00:02:17.888 User defined options 00:02:17.888 buildtype : debug 00:02:17.888 default_library: shared 00:02:17.888 libdir : /usr/local/lib 00:02:17.888 00:02:17.888 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:18.456 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:18.715 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:18.715 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:18.715 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:18.715 [4/37] Compiling C object samples/null.p/null.c.o 00:02:18.715 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:18.715 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:18.715 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:18.715 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:18.715 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:18.715 [10/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:18.715 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:18.715 [12/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:18.715 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:18.715 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:18.715 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:18.715 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:18.715 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:18.715 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:18.715 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:18.715 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:18.715 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:18.715 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:18.715 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:18.715 [24/37] Compiling C object samples/client.p/client.c.o 00:02:18.715 [25/37] Compiling C object samples/server.p/server.c.o 00:02:18.715 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:18.715 [27/37] Linking target samples/client 00:02:18.715 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:18.715 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:18.715 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:18.715 [31/37] Linking target test/unit_tests 00:02:18.974 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:18.974 [33/37] Linking target samples/null 00:02:18.974 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:18.974 [35/37] Linking target samples/server 00:02:18.974 [36/37] Linking target samples/lspci 00:02:18.974 [37/37] Linking target samples/gpio-pci-idio-16 00:02:18.974 INFO: autodetecting backend as ninja 00:02:18.974 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:18.974 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:19.541 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:19.541 ninja: no work to do. 00:02:24.816 The Meson build system 00:02:24.816 Version: 1.5.0 00:02:24.816 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:24.816 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:24.816 Build type: native build 00:02:24.816 Program cat found: YES (/usr/bin/cat) 00:02:24.816 Project name: DPDK 00:02:24.816 Project version: 24.03.0 00:02:24.816 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:24.816 C linker for the host machine: cc ld.bfd 2.40-14 00:02:24.816 Host machine cpu family: x86_64 00:02:24.816 Host machine cpu: x86_64 00:02:24.816 Message: ## Building in Developer Mode ## 00:02:24.816 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:24.816 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:24.816 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:24.816 Program python3 found: YES (/usr/bin/python3) 00:02:24.816 Program cat found: YES (/usr/bin/cat) 00:02:24.816 Compiler for C supports arguments -march=native: YES 00:02:24.816 Checking for size of "void *" : 8 00:02:24.816 Checking for size of "void *" : 8 (cached) 00:02:24.816 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:24.816 Library m found: YES 00:02:24.816 Library numa found: YES 00:02:24.816 Has header "numaif.h" : YES 00:02:24.816 Library fdt found: NO 00:02:24.816 Library execinfo found: NO 00:02:24.816 Has header "execinfo.h" : YES 00:02:24.816 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:24.816 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:24.816 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:24.816 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:24.816 Run-time dependency openssl found: YES 3.1.1 00:02:24.816 Run-time dependency libpcap found: YES 1.10.4 00:02:24.816 Has header "pcap.h" with dependency libpcap: YES 00:02:24.816 Compiler for C supports arguments -Wcast-qual: YES 00:02:24.816 Compiler for C supports arguments -Wdeprecated: YES 00:02:24.816 Compiler for C supports arguments -Wformat: YES 00:02:24.816 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:24.816 Compiler for C supports arguments -Wformat-security: NO 00:02:24.816 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:24.816 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:24.816 Compiler for C supports arguments -Wnested-externs: YES 00:02:24.816 Compiler for C supports arguments -Wold-style-definition: YES 00:02:24.816 Compiler for C supports arguments -Wpointer-arith: YES 00:02:24.816 Compiler for C supports arguments -Wsign-compare: YES 00:02:24.816 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:24.816 Compiler for C supports arguments -Wundef: YES 00:02:24.816 Compiler for C supports arguments -Wwrite-strings: YES 00:02:24.816 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:24.816 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:24.816 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:24.816 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:24.816 Program objdump found: YES (/usr/bin/objdump) 00:02:24.816 Compiler for C supports arguments -mavx512f: YES 00:02:24.816 Checking if "AVX512 checking" compiles: YES 00:02:24.816 Fetching value of define "__SSE4_2__" : 1 00:02:24.816 Fetching value of define "__AES__" : 1 00:02:24.816 Fetching value of define "__AVX__" : 1 00:02:24.816 Fetching value of define "__AVX2__" : 1 00:02:24.816 Fetching value of define "__AVX512BW__" : 1 00:02:24.816 Fetching value of define "__AVX512CD__" : 1 00:02:24.816 Fetching value of define "__AVX512DQ__" : 1 00:02:24.816 Fetching value of define "__AVX512F__" : 1 00:02:24.816 Fetching value of define "__AVX512VL__" : 1 00:02:24.816 Fetching value of define "__PCLMUL__" : 1 00:02:24.816 Fetching value of define "__RDRND__" : 1 00:02:24.816 Fetching value of define "__RDSEED__" : 1 00:02:24.816 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:24.816 Fetching value of define "__znver1__" : (undefined) 00:02:24.816 Fetching value of define "__znver2__" : (undefined) 00:02:24.816 Fetching value of define "__znver3__" : (undefined) 00:02:24.816 Fetching value of define "__znver4__" : (undefined) 00:02:24.816 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:24.816 Message: lib/log: Defining dependency "log" 00:02:24.816 Message: lib/kvargs: Defining dependency "kvargs" 00:02:24.816 Message: lib/telemetry: Defining dependency "telemetry" 00:02:24.816 Checking for function "getentropy" : NO 00:02:24.816 Message: lib/eal: Defining dependency "eal" 00:02:24.816 Message: lib/ring: Defining dependency "ring" 00:02:24.816 Message: lib/rcu: Defining dependency "rcu" 00:02:24.816 Message: lib/mempool: Defining dependency "mempool" 00:02:24.816 Message: lib/mbuf: Defining dependency "mbuf" 00:02:24.816 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:24.816 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:24.816 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:24.816 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:24.816 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:24.816 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:24.816 Compiler for C supports arguments -mpclmul: YES 00:02:24.816 Compiler for C supports arguments -maes: YES 00:02:24.816 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:24.816 Compiler for C supports arguments -mavx512bw: YES 00:02:24.816 Compiler for C supports arguments -mavx512dq: YES 00:02:24.816 Compiler for C supports arguments -mavx512vl: YES 00:02:24.816 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:24.816 Compiler for C supports arguments -mavx2: YES 00:02:24.816 Compiler for C supports arguments -mavx: YES 00:02:24.816 Message: lib/net: Defining dependency "net" 00:02:24.816 Message: lib/meter: Defining dependency "meter" 00:02:24.816 Message: lib/ethdev: Defining dependency "ethdev" 00:02:24.816 Message: lib/pci: Defining dependency "pci" 00:02:24.816 Message: lib/cmdline: Defining dependency "cmdline" 00:02:24.816 Message: lib/hash: Defining dependency "hash" 00:02:24.816 Message: lib/timer: Defining dependency "timer" 00:02:24.816 Message: lib/compressdev: Defining dependency "compressdev" 00:02:24.816 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:24.816 Message: lib/dmadev: Defining dependency 
"dmadev" 00:02:24.816 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:24.816 Message: lib/power: Defining dependency "power" 00:02:24.816 Message: lib/reorder: Defining dependency "reorder" 00:02:24.816 Message: lib/security: Defining dependency "security" 00:02:24.816 Has header "linux/userfaultfd.h" : YES 00:02:24.816 Has header "linux/vduse.h" : YES 00:02:24.816 Message: lib/vhost: Defining dependency "vhost" 00:02:24.816 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:24.816 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:24.816 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:24.816 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:24.816 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:24.816 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:24.816 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:24.816 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:24.816 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:24.816 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:24.816 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:24.816 Configuring doxy-api-html.conf using configuration 00:02:24.816 Configuring doxy-api-man.conf using configuration 00:02:24.816 Program mandb found: YES (/usr/bin/mandb) 00:02:24.816 Program sphinx-build found: NO 00:02:24.816 Configuring rte_build_config.h using configuration 00:02:24.816 Message: 00:02:24.816 ================= 00:02:24.816 Applications Enabled 00:02:24.816 ================= 00:02:24.816 00:02:24.816 apps: 00:02:24.816 00:02:24.816 00:02:24.817 Message: 00:02:24.817 ================= 00:02:24.817 Libraries Enabled 00:02:24.817 ================= 00:02:24.817 00:02:24.817 libs: 00:02:24.817 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:24.817 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:24.817 cryptodev, dmadev, power, reorder, security, vhost, 00:02:24.817 00:02:24.817 Message: 00:02:24.817 =============== 00:02:24.817 Drivers Enabled 00:02:24.817 =============== 00:02:24.817 00:02:24.817 common: 00:02:24.817 00:02:24.817 bus: 00:02:24.817 pci, vdev, 00:02:24.817 mempool: 00:02:24.817 ring, 00:02:24.817 dma: 00:02:24.817 00:02:24.817 net: 00:02:24.817 00:02:24.817 crypto: 00:02:24.817 00:02:24.817 compress: 00:02:24.817 00:02:24.817 vdpa: 00:02:24.817 00:02:24.817 00:02:24.817 Message: 00:02:24.817 ================= 00:02:24.817 Content Skipped 00:02:24.817 ================= 00:02:24.817 00:02:24.817 apps: 00:02:24.817 dumpcap: explicitly disabled via build config 00:02:24.817 graph: explicitly disabled via build config 00:02:24.817 pdump: explicitly disabled via build config 00:02:24.817 proc-info: explicitly disabled via build config 00:02:24.817 test-acl: explicitly disabled via build config 00:02:24.817 test-bbdev: explicitly disabled via build config 00:02:24.817 test-cmdline: explicitly disabled via build config 00:02:24.817 test-compress-perf: explicitly disabled via build config 00:02:24.817 test-crypto-perf: explicitly disabled via build config 00:02:24.817 test-dma-perf: explicitly disabled via build config 00:02:24.817 test-eventdev: explicitly disabled via build config 00:02:24.817 test-fib: explicitly disabled via build config 00:02:24.817 test-flow-perf: explicitly disabled via build config 00:02:24.817 test-gpudev: explicitly 
disabled via build config 00:02:24.817 test-mldev: explicitly disabled via build config 00:02:24.817 test-pipeline: explicitly disabled via build config 00:02:24.817 test-pmd: explicitly disabled via build config 00:02:24.817 test-regex: explicitly disabled via build config 00:02:24.817 test-sad: explicitly disabled via build config 00:02:24.817 test-security-perf: explicitly disabled via build config 00:02:24.817 00:02:24.817 libs: 00:02:24.817 argparse: explicitly disabled via build config 00:02:24.817 metrics: explicitly disabled via build config 00:02:24.817 acl: explicitly disabled via build config 00:02:24.817 bbdev: explicitly disabled via build config 00:02:24.817 bitratestats: explicitly disabled via build config 00:02:24.817 bpf: explicitly disabled via build config 00:02:24.817 cfgfile: explicitly disabled via build config 00:02:24.817 distributor: explicitly disabled via build config 00:02:24.817 efd: explicitly disabled via build config 00:02:24.817 eventdev: explicitly disabled via build config 00:02:24.817 dispatcher: explicitly disabled via build config 00:02:24.817 gpudev: explicitly disabled via build config 00:02:24.817 gro: explicitly disabled via build config 00:02:24.817 gso: explicitly disabled via build config 00:02:24.817 ip_frag: explicitly disabled via build config 00:02:24.817 jobstats: explicitly disabled via build config 00:02:24.817 latencystats: explicitly disabled via build config 00:02:24.817 lpm: explicitly disabled via build config 00:02:24.817 member: explicitly disabled via build config 00:02:24.817 pcapng: explicitly disabled via build config 00:02:24.817 rawdev: explicitly disabled via build config 00:02:24.817 regexdev: explicitly disabled via build config 00:02:24.817 mldev: explicitly disabled via build config 00:02:24.817 rib: explicitly disabled via build config 00:02:24.817 sched: explicitly disabled via build config 00:02:24.817 stack: explicitly disabled via build config 00:02:24.817 ipsec: explicitly disabled via build config 00:02:24.817 pdcp: explicitly disabled via build config 00:02:24.817 fib: explicitly disabled via build config 00:02:24.817 port: explicitly disabled via build config 00:02:24.817 pdump: explicitly disabled via build config 00:02:24.817 table: explicitly disabled via build config 00:02:24.817 pipeline: explicitly disabled via build config 00:02:24.817 graph: explicitly disabled via build config 00:02:24.817 node: explicitly disabled via build config 00:02:24.817 00:02:24.817 drivers: 00:02:24.817 common/cpt: not in enabled drivers build config 00:02:24.817 common/dpaax: not in enabled drivers build config 00:02:24.817 common/iavf: not in enabled drivers build config 00:02:24.817 common/idpf: not in enabled drivers build config 00:02:24.817 common/ionic: not in enabled drivers build config 00:02:24.817 common/mvep: not in enabled drivers build config 00:02:24.817 common/octeontx: not in enabled drivers build config 00:02:24.817 bus/auxiliary: not in enabled drivers build config 00:02:24.817 bus/cdx: not in enabled drivers build config 00:02:24.817 bus/dpaa: not in enabled drivers build config 00:02:24.817 bus/fslmc: not in enabled drivers build config 00:02:24.817 bus/ifpga: not in enabled drivers build config 00:02:24.817 bus/platform: not in enabled drivers build config 00:02:24.817 bus/uacce: not in enabled drivers build config 00:02:24.817 bus/vmbus: not in enabled drivers build config 00:02:24.817 common/cnxk: not in enabled drivers build config 00:02:24.817 common/mlx5: not in enabled drivers build config 
00:02:24.817 common/nfp: not in enabled drivers build config 00:02:24.817 common/nitrox: not in enabled drivers build config 00:02:24.817 common/qat: not in enabled drivers build config 00:02:24.817 common/sfc_efx: not in enabled drivers build config 00:02:24.817 mempool/bucket: not in enabled drivers build config 00:02:24.817 mempool/cnxk: not in enabled drivers build config 00:02:24.817 mempool/dpaa: not in enabled drivers build config 00:02:24.817 mempool/dpaa2: not in enabled drivers build config 00:02:24.817 mempool/octeontx: not in enabled drivers build config 00:02:24.817 mempool/stack: not in enabled drivers build config 00:02:24.817 dma/cnxk: not in enabled drivers build config 00:02:24.817 dma/dpaa: not in enabled drivers build config 00:02:24.817 dma/dpaa2: not in enabled drivers build config 00:02:24.817 dma/hisilicon: not in enabled drivers build config 00:02:24.817 dma/idxd: not in enabled drivers build config 00:02:24.817 dma/ioat: not in enabled drivers build config 00:02:24.817 dma/skeleton: not in enabled drivers build config 00:02:24.817 net/af_packet: not in enabled drivers build config 00:02:24.817 net/af_xdp: not in enabled drivers build config 00:02:24.817 net/ark: not in enabled drivers build config 00:02:24.817 net/atlantic: not in enabled drivers build config 00:02:24.817 net/avp: not in enabled drivers build config 00:02:24.817 net/axgbe: not in enabled drivers build config 00:02:24.817 net/bnx2x: not in enabled drivers build config 00:02:24.817 net/bnxt: not in enabled drivers build config 00:02:24.817 net/bonding: not in enabled drivers build config 00:02:24.817 net/cnxk: not in enabled drivers build config 00:02:24.817 net/cpfl: not in enabled drivers build config 00:02:24.817 net/cxgbe: not in enabled drivers build config 00:02:24.817 net/dpaa: not in enabled drivers build config 00:02:24.817 net/dpaa2: not in enabled drivers build config 00:02:24.817 net/e1000: not in enabled drivers build config 00:02:24.817 net/ena: not in enabled drivers build config 00:02:24.817 net/enetc: not in enabled drivers build config 00:02:24.817 net/enetfec: not in enabled drivers build config 00:02:24.817 net/enic: not in enabled drivers build config 00:02:24.817 net/failsafe: not in enabled drivers build config 00:02:24.817 net/fm10k: not in enabled drivers build config 00:02:24.817 net/gve: not in enabled drivers build config 00:02:24.817 net/hinic: not in enabled drivers build config 00:02:24.817 net/hns3: not in enabled drivers build config 00:02:24.817 net/i40e: not in enabled drivers build config 00:02:24.817 net/iavf: not in enabled drivers build config 00:02:24.817 net/ice: not in enabled drivers build config 00:02:24.817 net/idpf: not in enabled drivers build config 00:02:24.817 net/igc: not in enabled drivers build config 00:02:24.817 net/ionic: not in enabled drivers build config 00:02:24.817 net/ipn3ke: not in enabled drivers build config 00:02:24.817 net/ixgbe: not in enabled drivers build config 00:02:24.817 net/mana: not in enabled drivers build config 00:02:24.817 net/memif: not in enabled drivers build config 00:02:24.817 net/mlx4: not in enabled drivers build config 00:02:24.817 net/mlx5: not in enabled drivers build config 00:02:24.817 net/mvneta: not in enabled drivers build config 00:02:24.817 net/mvpp2: not in enabled drivers build config 00:02:24.817 net/netvsc: not in enabled drivers build config 00:02:24.817 net/nfb: not in enabled drivers build config 00:02:24.817 net/nfp: not in enabled drivers build config 00:02:24.817 net/ngbe: not in enabled 
drivers build config 00:02:24.817 net/null: not in enabled drivers build config 00:02:24.817 net/octeontx: not in enabled drivers build config 00:02:24.817 net/octeon_ep: not in enabled drivers build config 00:02:24.817 net/pcap: not in enabled drivers build config 00:02:24.817 net/pfe: not in enabled drivers build config 00:02:24.817 net/qede: not in enabled drivers build config 00:02:24.817 net/ring: not in enabled drivers build config 00:02:24.817 net/sfc: not in enabled drivers build config 00:02:24.817 net/softnic: not in enabled drivers build config 00:02:24.817 net/tap: not in enabled drivers build config 00:02:24.817 net/thunderx: not in enabled drivers build config 00:02:24.817 net/txgbe: not in enabled drivers build config 00:02:24.817 net/vdev_netvsc: not in enabled drivers build config 00:02:24.817 net/vhost: not in enabled drivers build config 00:02:24.817 net/virtio: not in enabled drivers build config 00:02:24.817 net/vmxnet3: not in enabled drivers build config 00:02:24.817 raw/*: missing internal dependency, "rawdev" 00:02:24.817 crypto/armv8: not in enabled drivers build config 00:02:24.817 crypto/bcmfs: not in enabled drivers build config 00:02:24.817 crypto/caam_jr: not in enabled drivers build config 00:02:24.817 crypto/ccp: not in enabled drivers build config 00:02:24.817 crypto/cnxk: not in enabled drivers build config 00:02:24.817 crypto/dpaa_sec: not in enabled drivers build config 00:02:24.817 crypto/dpaa2_sec: not in enabled drivers build config 00:02:24.818 crypto/ipsec_mb: not in enabled drivers build config 00:02:24.818 crypto/mlx5: not in enabled drivers build config 00:02:24.818 crypto/mvsam: not in enabled drivers build config 00:02:24.818 crypto/nitrox: not in enabled drivers build config 00:02:24.818 crypto/null: not in enabled drivers build config 00:02:24.818 crypto/octeontx: not in enabled drivers build config 00:02:24.818 crypto/openssl: not in enabled drivers build config 00:02:24.818 crypto/scheduler: not in enabled drivers build config 00:02:24.818 crypto/uadk: not in enabled drivers build config 00:02:24.818 crypto/virtio: not in enabled drivers build config 00:02:24.818 compress/isal: not in enabled drivers build config 00:02:24.818 compress/mlx5: not in enabled drivers build config 00:02:24.818 compress/nitrox: not in enabled drivers build config 00:02:24.818 compress/octeontx: not in enabled drivers build config 00:02:24.818 compress/zlib: not in enabled drivers build config 00:02:24.818 regex/*: missing internal dependency, "regexdev" 00:02:24.818 ml/*: missing internal dependency, "mldev" 00:02:24.818 vdpa/ifc: not in enabled drivers build config 00:02:24.818 vdpa/mlx5: not in enabled drivers build config 00:02:24.818 vdpa/nfp: not in enabled drivers build config 00:02:24.818 vdpa/sfc: not in enabled drivers build config 00:02:24.818 event/*: missing internal dependency, "eventdev" 00:02:24.818 baseband/*: missing internal dependency, "bbdev" 00:02:24.818 gpu/*: missing internal dependency, "gpudev" 00:02:24.818 00:02:24.818 00:02:24.818 Build targets in project: 85 00:02:24.818 00:02:24.818 DPDK 24.03.0 00:02:24.818 00:02:24.818 User defined options 00:02:24.818 buildtype : debug 00:02:24.818 default_library : shared 00:02:24.818 libdir : lib 00:02:24.818 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:24.818 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:24.818 c_link_args : 00:02:24.818 cpu_instruction_set: native 00:02:24.818 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:24.818 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:24.818 enable_docs : false 00:02:24.818 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:24.818 enable_kmods : false 00:02:24.818 max_lcores : 128 00:02:24.818 tests : false 00:02:24.818 00:02:24.818 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:25.089 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:25.089 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.348 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:25.348 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:25.348 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:25.348 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:25.348 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.348 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:25.348 [8/268] Linking static target lib/librte_kvargs.a 00:02:25.348 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:25.348 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:25.348 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:25.348 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:25.348 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:25.348 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:25.348 [15/268] Linking static target lib/librte_log.a 00:02:25.348 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:25.348 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:25.348 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:25.348 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:25.348 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:25.348 [21/268] Linking static target lib/librte_pci.a 00:02:25.607 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:25.607 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:25.607 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:25.607 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:25.607 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:25.607 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:25.607 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:25.607 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:25.607 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:25.865 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:25.865 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:25.865 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:25.865 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:25.865 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:25.865 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:25.865 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:25.865 [38/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:25.865 [39/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:25.865 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:25.865 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:25.865 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:25.865 [43/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:25.865 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:25.865 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:25.865 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:25.865 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:25.865 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:25.865 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:25.865 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:25.865 [51/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:25.865 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:25.865 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.865 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:25.865 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:25.865 [56/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:25.865 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:25.865 [58/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:25.865 [59/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:25.865 [60/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:25.865 [61/268] Linking static target lib/librte_meter.a 00:02:25.865 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:25.865 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:25.865 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:25.865 [65/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:25.865 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:25.865 [67/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:25.865 [68/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:25.865 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:25.865 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:25.865 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:25.865 [72/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:25.865 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:25.865 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:25.865 [75/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:25.865 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:25.865 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:25.865 [78/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.865 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:25.865 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:25.865 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:25.865 [82/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.865 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:25.865 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:25.865 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:25.865 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:25.865 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:25.865 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:25.865 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:25.865 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:25.865 [91/268] Linking static target lib/librte_ring.a 00:02:25.865 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:25.865 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:25.865 [94/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:25.865 [95/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:25.865 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:25.865 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:25.865 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:25.865 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:25.865 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:25.865 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:25.865 [102/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.865 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:25.865 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:25.865 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:25.865 [106/268] Linking static target lib/librte_telemetry.a 00:02:25.865 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:25.865 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:25.865 [109/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:25.865 [110/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:25.866 [111/268] Linking static target 
lib/librte_mempool.a 00:02:25.866 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:25.866 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:25.866 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:25.866 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:25.866 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:25.866 [117/268] Linking static target lib/librte_net.a 00:02:25.866 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.866 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:26.124 [120/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:26.124 [121/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:26.124 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:26.124 [123/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:26.124 [124/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.124 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:26.124 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:26.124 [127/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:26.124 [128/268] Linking static target lib/librte_cmdline.a 00:02:26.124 [129/268] Linking static target lib/librte_rcu.a 00:02:26.124 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:26.124 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:26.124 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:26.124 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:26.124 [134/268] Linking static target lib/librte_eal.a 00:02:26.124 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.124 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.124 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:26.124 [138/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:26.124 [139/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:26.124 [140/268] Linking static target lib/librte_mbuf.a 00:02:26.124 [141/268] Linking target lib/librte_log.so.24.1 00:02:26.124 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.124 [143/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:26.124 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:26.124 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:26.124 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:26.124 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:26.124 [148/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:26.124 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:26.383 [150/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.383 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 
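The configuration summary at the top of this build output maps onto DPDK's stock meson options. A minimal sketch of the equivalent configure step, assuming those option names (the exact invocation is driven by SPDK's build scripts and is not shown in this log; the elided lists appear verbatim in the summary above):

  # Assumed invocation -- illustrative only, option lists elided.
  meson setup dpdk/build-tmp dpdk \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false -Denable_kmods=false \
      -Dmax_lcores=128 -Dtests=false \
      -Ddisable_libs="<list from the summary above>"
  # The actual build step is reported later in this log:
  #   /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96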
00:02:26.383 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:26.383 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:26.383 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:26.383 [155/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:26.383 [156/268] Linking static target lib/librte_timer.a 00:02:26.383 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:26.383 [158/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.383 [159/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:26.383 [160/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:26.383 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:26.383 [162/268] Linking target lib/librte_kvargs.so.24.1 00:02:26.383 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:26.383 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:26.383 [165/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.383 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:26.383 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:26.383 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:26.383 [169/268] Linking static target lib/librte_power.a 00:02:26.383 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:26.383 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:26.383 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:26.383 [173/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:26.383 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:26.383 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:26.383 [176/268] Linking static target lib/librte_dmadev.a 00:02:26.383 [177/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:26.383 [178/268] Linking static target lib/librte_compressdev.a 00:02:26.383 [179/268] Linking target lib/librte_telemetry.so.24.1 00:02:26.383 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:26.383 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:26.384 [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:26.384 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:26.384 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:26.384 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:26.384 [186/268] Linking static target lib/librte_reorder.a 00:02:26.384 [187/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:26.643 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:26.643 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:26.643 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:26.643 [191/268] Linking static target lib/librte_security.a 00:02:26.643 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:26.643 
[193/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:26.643 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:26.643 [195/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:26.643 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:26.643 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.643 [198/268] Linking static target lib/librte_hash.a 00:02:26.643 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.643 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.643 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.643 [202/268] Linking static target drivers/librte_bus_vdev.a 00:02:26.643 [203/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.643 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:26.643 [205/268] Linking static target lib/librte_cryptodev.a 00:02:26.643 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.643 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.643 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.643 [209/268] Linking static target drivers/librte_bus_pci.a 00:02:26.643 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:26.902 [211/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.902 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.902 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.902 [214/268] Linking static target drivers/librte_mempool_ring.a 00:02:26.902 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.902 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:26.902 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.902 [218/268] Linking static target lib/librte_ethdev.a 00:02:26.902 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.161 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.161 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.161 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.161 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.161 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:27.420 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.420 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.420 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.354 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.354 [229/268] Linking static target 
lib/librte_vhost.a 00:02:28.612 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.991 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.263 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.200 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.200 [234/268] Linking target lib/librte_eal.so.24.1 00:02:36.200 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:36.200 [236/268] Linking target lib/librte_ring.so.24.1 00:02:36.200 [237/268] Linking target lib/librte_meter.so.24.1 00:02:36.200 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:36.200 [239/268] Linking target lib/librte_timer.so.24.1 00:02:36.200 [240/268] Linking target lib/librte_pci.so.24.1 00:02:36.200 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:36.460 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:36.460 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:36.460 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:36.460 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:36.460 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:36.460 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:36.460 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:36.460 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:36.719 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:36.719 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:36.719 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:36.719 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:36.719 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:36.719 [255/268] Linking target lib/librte_net.so.24.1 00:02:36.719 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:36.719 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:36.719 [258/268] Linking target lib/librte_compressdev.so.24.1 00:02:36.977 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:36.977 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:36.977 [261/268] Linking target lib/librte_hash.so.24.1 00:02:36.977 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:36.977 [263/268] Linking target lib/librte_security.so.24.1 00:02:36.977 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:36.977 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:37.235 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:37.235 [267/268] Linking target lib/librte_power.so.24.1 00:02:37.235 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:37.235 INFO: autodetecting backend as ninja 00:02:37.235 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:49.444 CC lib/log/log.o 00:02:49.444 CC lib/log/log_flags.o 00:02:49.444 CC lib/log/log_deprecated.o 00:02:49.444 CC lib/ut/ut.o 00:02:49.444 
CC lib/ut_mock/mock.o 00:02:49.444 LIB libspdk_ut_mock.a 00:02:49.444 LIB libspdk_ut.a 00:02:49.444 LIB libspdk_log.a 00:02:49.444 SO libspdk_ut.so.2.0 00:02:49.444 SO libspdk_ut_mock.so.6.0 00:02:49.444 SO libspdk_log.so.7.1 00:02:49.444 SYMLINK libspdk_ut.so 00:02:49.444 SYMLINK libspdk_ut_mock.so 00:02:49.444 SYMLINK libspdk_log.so 00:02:49.444 CC lib/ioat/ioat.o 00:02:49.444 CXX lib/trace_parser/trace.o 00:02:49.444 CC lib/dma/dma.o 00:02:49.444 CC lib/util/base64.o 00:02:49.444 CC lib/util/bit_array.o 00:02:49.444 CC lib/util/cpuset.o 00:02:49.444 CC lib/util/crc16.o 00:02:49.444 CC lib/util/crc32.o 00:02:49.444 CC lib/util/crc32c.o 00:02:49.444 CC lib/util/crc32_ieee.o 00:02:49.444 CC lib/util/crc64.o 00:02:49.444 CC lib/util/dif.o 00:02:49.444 CC lib/util/fd.o 00:02:49.444 CC lib/util/fd_group.o 00:02:49.444 CC lib/util/file.o 00:02:49.444 CC lib/util/hexlify.o 00:02:49.444 CC lib/util/iov.o 00:02:49.444 CC lib/util/math.o 00:02:49.444 CC lib/util/net.o 00:02:49.444 CC lib/util/pipe.o 00:02:49.444 CC lib/util/strerror_tls.o 00:02:49.444 CC lib/util/string.o 00:02:49.444 CC lib/util/uuid.o 00:02:49.444 CC lib/util/xor.o 00:02:49.444 CC lib/util/zipf.o 00:02:49.444 CC lib/util/md5.o 00:02:49.444 CC lib/vfio_user/host/vfio_user_pci.o 00:02:49.444 CC lib/vfio_user/host/vfio_user.o 00:02:49.444 LIB libspdk_dma.a 00:02:49.444 SO libspdk_dma.so.5.0 00:02:49.444 LIB libspdk_ioat.a 00:02:49.444 SYMLINK libspdk_dma.so 00:02:49.444 SO libspdk_ioat.so.7.0 00:02:49.444 SYMLINK libspdk_ioat.so 00:02:49.444 LIB libspdk_vfio_user.a 00:02:49.444 SO libspdk_vfio_user.so.5.0 00:02:49.444 LIB libspdk_util.a 00:02:49.444 SYMLINK libspdk_vfio_user.so 00:02:49.444 SO libspdk_util.so.10.1 00:02:49.444 SYMLINK libspdk_util.so 00:02:49.444 LIB libspdk_trace_parser.a 00:02:49.444 SO libspdk_trace_parser.so.6.0 00:02:49.444 SYMLINK libspdk_trace_parser.so 00:02:49.444 CC lib/env_dpdk/env.o 00:02:49.444 CC lib/conf/conf.o 00:02:49.444 CC lib/vmd/vmd.o 00:02:49.444 CC lib/env_dpdk/memory.o 00:02:49.444 CC lib/env_dpdk/pci.o 00:02:49.444 CC lib/idxd/idxd.o 00:02:49.444 CC lib/vmd/led.o 00:02:49.444 CC lib/env_dpdk/init.o 00:02:49.444 CC lib/idxd/idxd_user.o 00:02:49.444 CC lib/idxd/idxd_kernel.o 00:02:49.444 CC lib/env_dpdk/threads.o 00:02:49.444 CC lib/env_dpdk/pci_ioat.o 00:02:49.444 CC lib/rdma_provider/common.o 00:02:49.444 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:49.444 CC lib/env_dpdk/pci_virtio.o 00:02:49.444 CC lib/json/json_parse.o 00:02:49.444 CC lib/env_dpdk/pci_vmd.o 00:02:49.444 CC lib/env_dpdk/pci_idxd.o 00:02:49.444 CC lib/json/json_util.o 00:02:49.444 CC lib/json/json_write.o 00:02:49.444 CC lib/env_dpdk/pci_event.o 00:02:49.445 CC lib/env_dpdk/sigbus_handler.o 00:02:49.445 CC lib/env_dpdk/pci_dpdk.o 00:02:49.445 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:49.445 CC lib/rdma_utils/rdma_utils.o 00:02:49.445 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.445 LIB libspdk_rdma_provider.a 00:02:49.445 SO libspdk_rdma_provider.so.6.0 00:02:49.445 LIB libspdk_conf.a 00:02:49.445 SO libspdk_conf.so.6.0 00:02:49.445 LIB libspdk_rdma_utils.a 00:02:49.445 LIB libspdk_json.a 00:02:49.445 SYMLINK libspdk_rdma_provider.so 00:02:49.445 SO libspdk_rdma_utils.so.1.0 00:02:49.445 SO libspdk_json.so.6.0 00:02:49.445 SYMLINK libspdk_conf.so 00:02:49.445 SYMLINK libspdk_rdma_utils.so 00:02:49.445 SYMLINK libspdk_json.so 00:02:49.445 LIB libspdk_idxd.a 00:02:49.445 SO libspdk_idxd.so.12.1 00:02:49.704 LIB libspdk_vmd.a 00:02:49.704 SO libspdk_vmd.so.6.0 00:02:49.704 SYMLINK libspdk_idxd.so 00:02:49.704 SYMLINK 
libspdk_vmd.so 00:02:49.704 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.704 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.704 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.704 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:49.975 LIB libspdk_jsonrpc.a 00:02:49.975 SO libspdk_jsonrpc.so.6.0 00:02:49.975 SYMLINK libspdk_jsonrpc.so 00:02:49.975 LIB libspdk_env_dpdk.a 00:02:50.316 SO libspdk_env_dpdk.so.15.1 00:02:50.316 SYMLINK libspdk_env_dpdk.so 00:02:50.316 CC lib/rpc/rpc.o 00:02:50.599 LIB libspdk_rpc.a 00:02:50.599 SO libspdk_rpc.so.6.0 00:02:50.599 SYMLINK libspdk_rpc.so 00:02:50.871 CC lib/keyring/keyring.o 00:02:50.871 CC lib/notify/notify.o 00:02:50.871 CC lib/notify/notify_rpc.o 00:02:50.871 CC lib/keyring/keyring_rpc.o 00:02:51.169 CC lib/trace/trace.o 00:02:51.169 CC lib/trace/trace_flags.o 00:02:51.169 CC lib/trace/trace_rpc.o 00:02:51.169 LIB libspdk_notify.a 00:02:51.169 SO libspdk_notify.so.6.0 00:02:51.169 LIB libspdk_keyring.a 00:02:51.169 LIB libspdk_trace.a 00:02:51.169 SYMLINK libspdk_notify.so 00:02:51.169 SO libspdk_keyring.so.2.0 00:02:51.169 SO libspdk_trace.so.11.0 00:02:51.450 SYMLINK libspdk_keyring.so 00:02:51.450 SYMLINK libspdk_trace.so 00:02:51.710 CC lib/sock/sock.o 00:02:51.710 CC lib/sock/sock_rpc.o 00:02:51.710 CC lib/thread/thread.o 00:02:51.710 CC lib/thread/iobuf.o 00:02:51.970 LIB libspdk_sock.a 00:02:51.970 SO libspdk_sock.so.10.0 00:02:51.970 SYMLINK libspdk_sock.so 00:02:52.230 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.230 CC lib/nvme/nvme_ctrlr.o 00:02:52.230 CC lib/nvme/nvme_fabric.o 00:02:52.230 CC lib/nvme/nvme_ns_cmd.o 00:02:52.230 CC lib/nvme/nvme_ns.o 00:02:52.230 CC lib/nvme/nvme_pcie_common.o 00:02:52.230 CC lib/nvme/nvme_pcie.o 00:02:52.230 CC lib/nvme/nvme_qpair.o 00:02:52.489 CC lib/nvme/nvme.o 00:02:52.489 CC lib/nvme/nvme_quirks.o 00:02:52.489 CC lib/nvme/nvme_transport.o 00:02:52.489 CC lib/nvme/nvme_discovery.o 00:02:52.489 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.489 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.489 CC lib/nvme/nvme_tcp.o 00:02:52.489 CC lib/nvme/nvme_opal.o 00:02:52.489 CC lib/nvme/nvme_io_msg.o 00:02:52.489 CC lib/nvme/nvme_poll_group.o 00:02:52.489 CC lib/nvme/nvme_zns.o 00:02:52.489 CC lib/nvme/nvme_stubs.o 00:02:52.489 CC lib/nvme/nvme_auth.o 00:02:52.489 CC lib/nvme/nvme_cuse.o 00:02:52.489 CC lib/nvme/nvme_vfio_user.o 00:02:52.489 CC lib/nvme/nvme_rdma.o 00:02:52.748 LIB libspdk_thread.a 00:02:52.748 SO libspdk_thread.so.11.0 00:02:52.748 SYMLINK libspdk_thread.so 00:02:53.007 CC lib/blob/blobstore.o 00:02:53.007 CC lib/blob/request.o 00:02:53.007 CC lib/blob/zeroes.o 00:02:53.007 CC lib/blob/blob_bs_dev.o 00:02:53.007 CC lib/virtio/virtio.o 00:02:53.007 CC lib/virtio/virtio_vhost_user.o 00:02:53.007 CC lib/virtio/virtio_vfio_user.o 00:02:53.007 CC lib/virtio/virtio_pci.o 00:02:53.007 CC lib/accel/accel.o 00:02:53.007 CC lib/accel/accel_rpc.o 00:02:53.007 CC lib/accel/accel_sw.o 00:02:53.007 CC lib/init/json_config.o 00:02:53.007 CC lib/init/subsystem.o 00:02:53.007 CC lib/init/subsystem_rpc.o 00:02:53.007 CC lib/init/rpc.o 00:02:53.007 CC lib/vfu_tgt/tgt_endpoint.o 00:02:53.007 CC lib/vfu_tgt/tgt_rpc.o 00:02:53.007 CC lib/fsdev/fsdev.o 00:02:53.007 CC lib/fsdev/fsdev_io.o 00:02:53.007 CC lib/fsdev/fsdev_rpc.o 00:02:53.266 LIB libspdk_init.a 00:02:53.266 SO libspdk_init.so.6.0 00:02:53.266 LIB libspdk_virtio.a 00:02:53.525 SYMLINK libspdk_init.so 00:02:53.525 LIB libspdk_vfu_tgt.a 00:02:53.525 SO libspdk_virtio.so.7.0 00:02:53.525 SO libspdk_vfu_tgt.so.3.0 00:02:53.525 SYMLINK libspdk_virtio.so 00:02:53.525 SYMLINK 
libspdk_vfu_tgt.so 00:02:53.525 LIB libspdk_fsdev.a 00:02:53.784 SO libspdk_fsdev.so.2.0 00:02:53.784 CC lib/event/app.o 00:02:53.784 CC lib/event/reactor.o 00:02:53.784 CC lib/event/log_rpc.o 00:02:53.784 CC lib/event/app_rpc.o 00:02:53.784 CC lib/event/scheduler_static.o 00:02:53.784 SYMLINK libspdk_fsdev.so 00:02:54.043 LIB libspdk_accel.a 00:02:54.043 SO libspdk_accel.so.16.0 00:02:54.043 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:54.043 LIB libspdk_nvme.a 00:02:54.043 SYMLINK libspdk_accel.so 00:02:54.043 LIB libspdk_event.a 00:02:54.043 SO libspdk_event.so.14.0 00:02:54.043 SO libspdk_nvme.so.15.0 00:02:54.302 SYMLINK libspdk_event.so 00:02:54.302 SYMLINK libspdk_nvme.so 00:02:54.302 CC lib/bdev/bdev.o 00:02:54.302 CC lib/bdev/bdev_rpc.o 00:02:54.302 CC lib/bdev/part.o 00:02:54.302 CC lib/bdev/bdev_zone.o 00:02:54.302 CC lib/bdev/scsi_nvme.o 00:02:54.561 LIB libspdk_fuse_dispatcher.a 00:02:54.561 SO libspdk_fuse_dispatcher.so.1.0 00:02:54.561 SYMLINK libspdk_fuse_dispatcher.so 00:02:55.129 LIB libspdk_blob.a 00:02:55.389 SO libspdk_blob.so.11.0 00:02:55.389 SYMLINK libspdk_blob.so 00:02:55.648 CC lib/lvol/lvol.o 00:02:55.648 CC lib/blobfs/blobfs.o 00:02:55.648 CC lib/blobfs/tree.o 00:02:56.216 LIB libspdk_bdev.a 00:02:56.216 SO libspdk_bdev.so.17.0 00:02:56.216 LIB libspdk_blobfs.a 00:02:56.216 SYMLINK libspdk_bdev.so 00:02:56.216 LIB libspdk_lvol.a 00:02:56.216 SO libspdk_blobfs.so.10.0 00:02:56.475 SO libspdk_lvol.so.10.0 00:02:56.475 SYMLINK libspdk_blobfs.so 00:02:56.475 SYMLINK libspdk_lvol.so 00:02:56.734 CC lib/nvmf/ctrlr_discovery.o 00:02:56.734 CC lib/nvmf/ctrlr.o 00:02:56.734 CC lib/nvmf/ctrlr_bdev.o 00:02:56.734 CC lib/nvmf/subsystem.o 00:02:56.734 CC lib/nvmf/nvmf.o 00:02:56.734 CC lib/nvmf/nvmf_rpc.o 00:02:56.734 CC lib/scsi/dev.o 00:02:56.734 CC lib/nvmf/transport.o 00:02:56.734 CC lib/nvmf/tcp.o 00:02:56.734 CC lib/scsi/lun.o 00:02:56.734 CC lib/nvmf/stubs.o 00:02:56.734 CC lib/scsi/port.o 00:02:56.734 CC lib/nvmf/mdns_server.o 00:02:56.734 CC lib/scsi/scsi.o 00:02:56.734 CC lib/nvmf/vfio_user.o 00:02:56.734 CC lib/scsi/scsi_pr.o 00:02:56.734 CC lib/scsi/scsi_bdev.o 00:02:56.734 CC lib/nvmf/auth.o 00:02:56.734 CC lib/nvmf/rdma.o 00:02:56.734 CC lib/scsi/task.o 00:02:56.734 CC lib/scsi/scsi_rpc.o 00:02:56.734 CC lib/nbd/nbd.o 00:02:56.734 CC lib/nbd/nbd_rpc.o 00:02:56.734 CC lib/ublk/ublk.o 00:02:56.734 CC lib/ublk/ublk_rpc.o 00:02:56.734 CC lib/ftl/ftl_core.o 00:02:56.734 CC lib/ftl/ftl_init.o 00:02:56.734 CC lib/ftl/ftl_layout.o 00:02:56.734 CC lib/ftl/ftl_debug.o 00:02:56.734 CC lib/ftl/ftl_io.o 00:02:56.734 CC lib/ftl/ftl_sb.o 00:02:56.734 CC lib/ftl/ftl_l2p_flat.o 00:02:56.734 CC lib/ftl/ftl_l2p.o 00:02:56.734 CC lib/ftl/ftl_nv_cache.o 00:02:56.734 CC lib/ftl/ftl_band.o 00:02:56.734 CC lib/ftl/ftl_band_ops.o 00:02:56.734 CC lib/ftl/ftl_writer.o 00:02:56.734 CC lib/ftl/ftl_rq.o 00:02:56.734 CC lib/ftl/ftl_reloc.o 00:02:56.734 CC lib/ftl/ftl_l2p_cache.o 00:02:56.734 CC lib/ftl/ftl_p2l.o 00:02:56.734 CC lib/ftl/ftl_p2l_log.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:56.734 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:56.735 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:56.735 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:56.735 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:56.735 CC lib/ftl/utils/ftl_conf.o 00:02:56.735 CC lib/ftl/utils/ftl_md.o 00:02:56.735 CC lib/ftl/utils/ftl_mempool.o 00:02:56.735 CC lib/ftl/utils/ftl_bitmap.o 00:02:56.735 CC lib/ftl/utils/ftl_property.o 00:02:56.735 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:56.735 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:56.735 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:56.735 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:56.735 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:56.735 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:56.735 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:56.735 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:56.735 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:56.735 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:56.735 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:56.735 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:56.735 CC lib/ftl/base/ftl_base_dev.o 00:02:56.735 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:56.735 CC lib/ftl/base/ftl_base_bdev.o 00:02:56.735 CC lib/ftl/ftl_trace.o 00:02:57.301 LIB libspdk_scsi.a 00:02:57.301 SO libspdk_scsi.so.9.0 00:02:57.301 LIB libspdk_nbd.a 00:02:57.301 SO libspdk_nbd.so.7.0 00:02:57.301 LIB libspdk_ublk.a 00:02:57.301 SYMLINK libspdk_scsi.so 00:02:57.301 SO libspdk_ublk.so.3.0 00:02:57.301 SYMLINK libspdk_nbd.so 00:02:57.560 SYMLINK libspdk_ublk.so 00:02:57.560 LIB libspdk_ftl.a 00:02:57.560 CC lib/vhost/vhost.o 00:02:57.560 CC lib/iscsi/conn.o 00:02:57.560 CC lib/vhost/vhost_rpc.o 00:02:57.560 CC lib/vhost/vhost_scsi.o 00:02:57.560 CC lib/iscsi/init_grp.o 00:02:57.560 CC lib/vhost/vhost_blk.o 00:02:57.560 CC lib/iscsi/iscsi.o 00:02:57.560 CC lib/vhost/rte_vhost_user.o 00:02:57.560 CC lib/iscsi/param.o 00:02:57.560 CC lib/iscsi/portal_grp.o 00:02:57.560 CC lib/iscsi/tgt_node.o 00:02:57.560 CC lib/iscsi/iscsi_subsystem.o 00:02:57.560 CC lib/iscsi/iscsi_rpc.o 00:02:57.560 CC lib/iscsi/task.o 00:02:57.819 SO libspdk_ftl.so.9.0 00:02:58.077 SYMLINK libspdk_ftl.so 00:02:58.336 LIB libspdk_nvmf.a 00:02:58.336 LIB libspdk_vhost.a 00:02:58.594 SO libspdk_nvmf.so.20.0 00:02:58.594 SO libspdk_vhost.so.8.0 00:02:58.594 SYMLINK libspdk_vhost.so 00:02:58.594 SYMLINK libspdk_nvmf.so 00:02:58.594 LIB libspdk_iscsi.a 00:02:58.853 SO libspdk_iscsi.so.8.0 00:02:58.853 SYMLINK libspdk_iscsi.so 00:02:59.421 CC module/env_dpdk/env_dpdk_rpc.o 00:02:59.421 CC module/vfu_device/vfu_virtio.o 00:02:59.421 CC module/vfu_device/vfu_virtio_blk.o 00:02:59.421 CC module/vfu_device/vfu_virtio_scsi.o 00:02:59.421 CC module/vfu_device/vfu_virtio_fs.o 00:02:59.421 CC module/vfu_device/vfu_virtio_rpc.o 00:02:59.421 CC module/accel/dsa/accel_dsa_rpc.o 00:02:59.421 CC module/sock/posix/posix.o 00:02:59.421 CC module/accel/dsa/accel_dsa.o 00:02:59.421 CC module/accel/iaa/accel_iaa.o 00:02:59.421 CC module/accel/iaa/accel_iaa_rpc.o 00:02:59.421 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:59.421 CC module/fsdev/aio/fsdev_aio.o 00:02:59.421 CC module/accel/ioat/accel_ioat.o 00:02:59.421 CC module/fsdev/aio/linux_aio_mgr.o 00:02:59.421 CC module/scheduler/gscheduler/gscheduler.o 00:02:59.421 CC module/accel/ioat/accel_ioat_rpc.o 00:02:59.421 CC module/keyring/linux/keyring.o 00:02:59.421 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:59.421 LIB libspdk_env_dpdk_rpc.a 00:02:59.421 CC module/keyring/linux/keyring_rpc.o 00:02:59.421 CC module/accel/error/accel_error.o 00:02:59.421 CC module/accel/error/accel_error_rpc.o 00:02:59.421 CC module/blob/bdev/blob_bdev.o 00:02:59.421 CC module/keyring/file/keyring_rpc.o 00:02:59.421 CC module/keyring/file/keyring.o 
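The LIB / SO / SYMLINK triplets in this stretch are SPDK's library packaging steps: archive the objects, link the versioned shared object, then point an unversioned name at it. A rough sketch of what one such triplet amounts to, using libspdk_log and the log.o/log_flags.o/log_deprecated.o objects compiled earlier as the example (the real recipe lives in SPDK's makefiles, so treat the flags and layout as assumptions):

  # Illustrative only; SPDK's makefiles own the real recipe.
  ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o
  cc -shared -Wl,-soname,libspdk_log.so.7.1 -o libspdk_log.so.7.1 \
      log.o log_flags.o log_deprecated.o
  ln -sf libspdk_log.so.7.1 libspdk_log.so    # what a SYMLINK line records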
00:02:59.421 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:59.681 SO libspdk_env_dpdk_rpc.so.6.0 00:02:59.681 SYMLINK libspdk_env_dpdk_rpc.so 00:02:59.681 LIB libspdk_keyring_linux.a 00:02:59.681 LIB libspdk_scheduler_dpdk_governor.a 00:02:59.681 LIB libspdk_keyring_file.a 00:02:59.681 LIB libspdk_scheduler_gscheduler.a 00:02:59.681 LIB libspdk_accel_ioat.a 00:02:59.681 SO libspdk_scheduler_gscheduler.so.4.0 00:02:59.681 SO libspdk_keyring_file.so.2.0 00:02:59.681 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:59.681 LIB libspdk_accel_error.a 00:02:59.681 SO libspdk_keyring_linux.so.1.0 00:02:59.681 LIB libspdk_scheduler_dynamic.a 00:02:59.681 LIB libspdk_accel_iaa.a 00:02:59.681 SO libspdk_accel_ioat.so.6.0 00:02:59.681 SO libspdk_accel_error.so.2.0 00:02:59.681 SO libspdk_scheduler_dynamic.so.4.0 00:02:59.681 SO libspdk_accel_iaa.so.3.0 00:02:59.681 SYMLINK libspdk_scheduler_gscheduler.so 00:02:59.681 SYMLINK libspdk_keyring_linux.so 00:02:59.681 SYMLINK libspdk_keyring_file.so 00:02:59.681 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:59.681 LIB libspdk_accel_dsa.a 00:02:59.681 LIB libspdk_blob_bdev.a 00:02:59.940 SYMLINK libspdk_accel_ioat.so 00:02:59.940 SYMLINK libspdk_accel_error.so 00:02:59.940 SO libspdk_accel_dsa.so.5.0 00:02:59.940 SYMLINK libspdk_scheduler_dynamic.so 00:02:59.940 SO libspdk_blob_bdev.so.11.0 00:02:59.940 SYMLINK libspdk_accel_iaa.so 00:02:59.940 SYMLINK libspdk_accel_dsa.so 00:02:59.940 LIB libspdk_vfu_device.a 00:02:59.940 SYMLINK libspdk_blob_bdev.so 00:02:59.940 SO libspdk_vfu_device.so.3.0 00:02:59.940 SYMLINK libspdk_vfu_device.so 00:02:59.940 LIB libspdk_fsdev_aio.a 00:03:00.199 SO libspdk_fsdev_aio.so.1.0 00:03:00.199 LIB libspdk_sock_posix.a 00:03:00.199 SO libspdk_sock_posix.so.6.0 00:03:00.199 SYMLINK libspdk_fsdev_aio.so 00:03:00.199 SYMLINK libspdk_sock_posix.so 00:03:00.459 CC module/bdev/error/vbdev_error.o 00:03:00.459 CC module/bdev/error/vbdev_error_rpc.o 00:03:00.459 CC module/bdev/gpt/gpt.o 00:03:00.459 CC module/bdev/lvol/vbdev_lvol.o 00:03:00.459 CC module/bdev/gpt/vbdev_gpt.o 00:03:00.459 CC module/bdev/delay/vbdev_delay.o 00:03:00.459 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:00.459 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:00.459 CC module/bdev/passthru/vbdev_passthru.o 00:03:00.459 CC module/blobfs/bdev/blobfs_bdev.o 00:03:00.459 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:00.459 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:00.459 CC module/bdev/raid/bdev_raid.o 00:03:00.459 CC module/bdev/nvme/bdev_nvme.o 00:03:00.459 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.459 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:00.459 CC module/bdev/malloc/bdev_malloc.o 00:03:00.459 CC module/bdev/nvme/nvme_rpc.o 00:03:00.459 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.459 CC module/bdev/iscsi/bdev_iscsi.o 00:03:00.459 CC module/bdev/nvme/vbdev_opal.o 00:03:00.459 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:00.459 CC module/bdev/raid/raid1.o 00:03:00.459 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:00.459 CC module/bdev/raid/raid0.o 00:03:00.459 CC module/bdev/nvme/bdev_mdns_client.o 00:03:00.459 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:00.459 CC module/bdev/split/vbdev_split_rpc.o 00:03:00.459 CC module/bdev/split/vbdev_split.o 00:03:00.459 CC module/bdev/null/bdev_null.o 00:03:00.459 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:00.459 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:00.459 CC module/bdev/raid/concat.o 00:03:00.459 CC module/bdev/null/bdev_null_rpc.o 00:03:00.459 CC module/bdev/ftl/bdev_ftl.o 00:03:00.459 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:03:00.459 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:00.459 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:00.459 CC module/bdev/aio/bdev_aio.o 00:03:00.459 CC module/bdev/aio/bdev_aio_rpc.o 00:03:00.459 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:00.459 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:00.718 LIB libspdk_blobfs_bdev.a 00:03:00.718 SO libspdk_blobfs_bdev.so.6.0 00:03:00.718 LIB libspdk_bdev_gpt.a 00:03:00.718 LIB libspdk_bdev_split.a 00:03:00.718 SYMLINK libspdk_blobfs_bdev.so 00:03:00.718 SO libspdk_bdev_gpt.so.6.0 00:03:00.718 LIB libspdk_bdev_null.a 00:03:00.718 LIB libspdk_bdev_error.a 00:03:00.718 SO libspdk_bdev_split.so.6.0 00:03:00.718 LIB libspdk_bdev_ftl.a 00:03:00.718 LIB libspdk_bdev_passthru.a 00:03:00.718 SO libspdk_bdev_error.so.6.0 00:03:00.718 SO libspdk_bdev_null.so.6.0 00:03:00.718 SO libspdk_bdev_ftl.so.6.0 00:03:00.718 SO libspdk_bdev_passthru.so.6.0 00:03:00.718 LIB libspdk_bdev_zone_block.a 00:03:00.718 LIB libspdk_bdev_delay.a 00:03:00.718 SYMLINK libspdk_bdev_gpt.so 00:03:00.718 SYMLINK libspdk_bdev_split.so 00:03:00.718 SYMLINK libspdk_bdev_error.so 00:03:00.718 SYMLINK libspdk_bdev_ftl.so 00:03:00.718 SYMLINK libspdk_bdev_null.so 00:03:00.718 LIB libspdk_bdev_malloc.a 00:03:00.718 LIB libspdk_bdev_aio.a 00:03:00.718 SO libspdk_bdev_delay.so.6.0 00:03:00.718 SO libspdk_bdev_zone_block.so.6.0 00:03:00.718 LIB libspdk_bdev_iscsi.a 00:03:00.718 SO libspdk_bdev_malloc.so.6.0 00:03:00.718 SYMLINK libspdk_bdev_passthru.so 00:03:00.718 SO libspdk_bdev_aio.so.6.0 00:03:00.718 SO libspdk_bdev_iscsi.so.6.0 00:03:00.718 SYMLINK libspdk_bdev_delay.so 00:03:00.718 SYMLINK libspdk_bdev_zone_block.so 00:03:00.977 SYMLINK libspdk_bdev_malloc.so 00:03:00.977 LIB libspdk_bdev_lvol.a 00:03:00.977 SYMLINK libspdk_bdev_iscsi.so 00:03:00.977 SYMLINK libspdk_bdev_aio.so 00:03:00.977 LIB libspdk_bdev_virtio.a 00:03:00.977 SO libspdk_bdev_lvol.so.6.0 00:03:00.977 SO libspdk_bdev_virtio.so.6.0 00:03:00.977 SYMLINK libspdk_bdev_lvol.so 00:03:00.977 SYMLINK libspdk_bdev_virtio.so 00:03:01.236 LIB libspdk_bdev_raid.a 00:03:01.236 SO libspdk_bdev_raid.so.6.0 00:03:01.236 SYMLINK libspdk_bdev_raid.so 00:03:02.187 LIB libspdk_bdev_nvme.a 00:03:02.187 SO libspdk_bdev_nvme.so.7.1 00:03:02.452 SYMLINK libspdk_bdev_nvme.so 00:03:03.020 CC module/event/subsystems/iobuf/iobuf.o 00:03:03.020 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:03.020 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:03.020 CC module/event/subsystems/vmd/vmd.o 00:03:03.020 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:03.020 CC module/event/subsystems/scheduler/scheduler.o 00:03:03.020 CC module/event/subsystems/keyring/keyring.o 00:03:03.020 CC module/event/subsystems/fsdev/fsdev.o 00:03:03.020 CC module/event/subsystems/sock/sock.o 00:03:03.020 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:03.020 LIB libspdk_event_keyring.a 00:03:03.280 LIB libspdk_event_vfu_tgt.a 00:03:03.280 LIB libspdk_event_vhost_blk.a 00:03:03.280 LIB libspdk_event_iobuf.a 00:03:03.280 LIB libspdk_event_fsdev.a 00:03:03.280 LIB libspdk_event_vmd.a 00:03:03.280 LIB libspdk_event_scheduler.a 00:03:03.280 LIB libspdk_event_sock.a 00:03:03.280 SO libspdk_event_keyring.so.1.0 00:03:03.280 SO libspdk_event_vhost_blk.so.3.0 00:03:03.280 SO libspdk_event_vfu_tgt.so.3.0 00:03:03.280 SO libspdk_event_iobuf.so.3.0 00:03:03.280 SO libspdk_event_fsdev.so.1.0 00:03:03.280 SO libspdk_event_scheduler.so.4.0 00:03:03.280 SO libspdk_event_vmd.so.6.0 00:03:03.280 SO libspdk_event_sock.so.5.0 
00:03:03.280 SYMLINK libspdk_event_vhost_blk.so 00:03:03.280 SYMLINK libspdk_event_keyring.so 00:03:03.280 SYMLINK libspdk_event_vfu_tgt.so 00:03:03.280 SYMLINK libspdk_event_scheduler.so 00:03:03.280 SYMLINK libspdk_event_iobuf.so 00:03:03.280 SYMLINK libspdk_event_fsdev.so 00:03:03.280 SYMLINK libspdk_event_vmd.so 00:03:03.280 SYMLINK libspdk_event_sock.so 00:03:03.539 CC module/event/subsystems/accel/accel.o 00:03:03.799 LIB libspdk_event_accel.a 00:03:03.799 SO libspdk_event_accel.so.6.0 00:03:03.799 SYMLINK libspdk_event_accel.so 00:03:04.059 CC module/event/subsystems/bdev/bdev.o 00:03:04.318 LIB libspdk_event_bdev.a 00:03:04.318 SO libspdk_event_bdev.so.6.0 00:03:04.318 SYMLINK libspdk_event_bdev.so 00:03:04.578 CC module/event/subsystems/nbd/nbd.o 00:03:04.578 CC module/event/subsystems/ublk/ublk.o 00:03:04.578 CC module/event/subsystems/scsi/scsi.o 00:03:04.578 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:04.578 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:04.838 LIB libspdk_event_nbd.a 00:03:04.838 LIB libspdk_event_ublk.a 00:03:04.838 LIB libspdk_event_scsi.a 00:03:04.838 SO libspdk_event_nbd.so.6.0 00:03:04.838 SO libspdk_event_ublk.so.3.0 00:03:04.838 SO libspdk_event_scsi.so.6.0 00:03:04.838 LIB libspdk_event_nvmf.a 00:03:04.838 SYMLINK libspdk_event_nbd.so 00:03:04.838 SYMLINK libspdk_event_ublk.so 00:03:04.838 SYMLINK libspdk_event_scsi.so 00:03:04.838 SO libspdk_event_nvmf.so.6.0 00:03:05.098 SYMLINK libspdk_event_nvmf.so 00:03:05.357 CC module/event/subsystems/iscsi/iscsi.o 00:03:05.357 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:05.357 LIB libspdk_event_vhost_scsi.a 00:03:05.357 LIB libspdk_event_iscsi.a 00:03:05.357 SO libspdk_event_vhost_scsi.so.3.0 00:03:05.357 SO libspdk_event_iscsi.so.6.0 00:03:05.617 SYMLINK libspdk_event_vhost_scsi.so 00:03:05.617 SYMLINK libspdk_event_iscsi.so 00:03:05.617 SO libspdk.so.6.0 00:03:05.617 SYMLINK libspdk.so 00:03:06.190 CC app/spdk_top/spdk_top.o 00:03:06.190 CC app/trace_record/trace_record.o 00:03:06.190 CC app/spdk_nvme_discover/discovery_aer.o 00:03:06.190 CC app/spdk_lspci/spdk_lspci.o 00:03:06.190 CXX app/trace/trace.o 00:03:06.190 CC app/spdk_nvme_identify/identify.o 00:03:06.190 CC test/rpc_client/rpc_client_test.o 00:03:06.190 TEST_HEADER include/spdk/accel_module.h 00:03:06.190 TEST_HEADER include/spdk/accel.h 00:03:06.190 TEST_HEADER include/spdk/assert.h 00:03:06.190 TEST_HEADER include/spdk/base64.h 00:03:06.191 TEST_HEADER include/spdk/barrier.h 00:03:06.191 TEST_HEADER include/spdk/bdev.h 00:03:06.191 TEST_HEADER include/spdk/bdev_module.h 00:03:06.191 TEST_HEADER include/spdk/bdev_zone.h 00:03:06.191 TEST_HEADER include/spdk/bit_array.h 00:03:06.191 CC app/spdk_nvme_perf/perf.o 00:03:06.191 TEST_HEADER include/spdk/blob_bdev.h 00:03:06.191 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:06.191 TEST_HEADER include/spdk/bit_pool.h 00:03:06.191 TEST_HEADER include/spdk/blobfs.h 00:03:06.191 TEST_HEADER include/spdk/conf.h 00:03:06.191 TEST_HEADER include/spdk/blob.h 00:03:06.191 TEST_HEADER include/spdk/config.h 00:03:06.191 TEST_HEADER include/spdk/cpuset.h 00:03:06.191 TEST_HEADER include/spdk/crc32.h 00:03:06.191 TEST_HEADER include/spdk/crc16.h 00:03:06.191 TEST_HEADER include/spdk/crc64.h 00:03:06.191 TEST_HEADER include/spdk/dma.h 00:03:06.191 TEST_HEADER include/spdk/dif.h 00:03:06.191 TEST_HEADER include/spdk/endian.h 00:03:06.191 TEST_HEADER include/spdk/env.h 00:03:06.191 TEST_HEADER include/spdk/env_dpdk.h 00:03:06.191 TEST_HEADER include/spdk/event.h 00:03:06.191 TEST_HEADER 
include/spdk/fd_group.h 00:03:06.191 TEST_HEADER include/spdk/fd.h 00:03:06.191 TEST_HEADER include/spdk/file.h 00:03:06.191 TEST_HEADER include/spdk/ftl.h 00:03:06.191 TEST_HEADER include/spdk/fsdev.h 00:03:06.191 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:06.191 TEST_HEADER include/spdk/fsdev_module.h 00:03:06.191 TEST_HEADER include/spdk/hexlify.h 00:03:06.191 TEST_HEADER include/spdk/gpt_spec.h 00:03:06.191 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:06.191 TEST_HEADER include/spdk/idxd.h 00:03:06.191 TEST_HEADER include/spdk/histogram_data.h 00:03:06.191 TEST_HEADER include/spdk/ioat.h 00:03:06.191 TEST_HEADER include/spdk/idxd_spec.h 00:03:06.191 TEST_HEADER include/spdk/init.h 00:03:06.191 TEST_HEADER include/spdk/jsonrpc.h 00:03:06.191 TEST_HEADER include/spdk/ioat_spec.h 00:03:06.191 TEST_HEADER include/spdk/iscsi_spec.h 00:03:06.191 TEST_HEADER include/spdk/json.h 00:03:06.191 TEST_HEADER include/spdk/keyring.h 00:03:06.191 TEST_HEADER include/spdk/keyring_module.h 00:03:06.191 TEST_HEADER include/spdk/likely.h 00:03:06.191 TEST_HEADER include/spdk/log.h 00:03:06.191 CC app/nvmf_tgt/nvmf_main.o 00:03:06.191 CC app/spdk_dd/spdk_dd.o 00:03:06.191 TEST_HEADER include/spdk/lvol.h 00:03:06.191 TEST_HEADER include/spdk/md5.h 00:03:06.191 TEST_HEADER include/spdk/mmio.h 00:03:06.191 TEST_HEADER include/spdk/nbd.h 00:03:06.191 TEST_HEADER include/spdk/memory.h 00:03:06.191 TEST_HEADER include/spdk/net.h 00:03:06.191 TEST_HEADER include/spdk/notify.h 00:03:06.191 TEST_HEADER include/spdk/nvme.h 00:03:06.191 CC app/iscsi_tgt/iscsi_tgt.o 00:03:06.191 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:06.191 TEST_HEADER include/spdk/nvme_intel.h 00:03:06.191 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:06.191 TEST_HEADER include/spdk/nvme_spec.h 00:03:06.191 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:06.191 TEST_HEADER include/spdk/nvme_zns.h 00:03:06.191 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:06.191 TEST_HEADER include/spdk/nvmf.h 00:03:06.191 TEST_HEADER include/spdk/nvmf_transport.h 00:03:06.191 TEST_HEADER include/spdk/nvmf_spec.h 00:03:06.191 TEST_HEADER include/spdk/opal.h 00:03:06.191 TEST_HEADER include/spdk/opal_spec.h 00:03:06.191 TEST_HEADER include/spdk/pci_ids.h 00:03:06.191 TEST_HEADER include/spdk/pipe.h 00:03:06.191 TEST_HEADER include/spdk/queue.h 00:03:06.191 TEST_HEADER include/spdk/reduce.h 00:03:06.191 TEST_HEADER include/spdk/rpc.h 00:03:06.191 TEST_HEADER include/spdk/scheduler.h 00:03:06.191 TEST_HEADER include/spdk/scsi.h 00:03:06.191 TEST_HEADER include/spdk/scsi_spec.h 00:03:06.191 TEST_HEADER include/spdk/sock.h 00:03:06.191 TEST_HEADER include/spdk/stdinc.h 00:03:06.191 TEST_HEADER include/spdk/string.h 00:03:06.191 TEST_HEADER include/spdk/thread.h 00:03:06.191 TEST_HEADER include/spdk/trace.h 00:03:06.191 TEST_HEADER include/spdk/trace_parser.h 00:03:06.191 CC app/spdk_tgt/spdk_tgt.o 00:03:06.191 TEST_HEADER include/spdk/tree.h 00:03:06.191 TEST_HEADER include/spdk/ublk.h 00:03:06.191 TEST_HEADER include/spdk/util.h 00:03:06.191 TEST_HEADER include/spdk/version.h 00:03:06.191 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:06.191 TEST_HEADER include/spdk/uuid.h 00:03:06.191 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:06.191 TEST_HEADER include/spdk/vhost.h 00:03:06.191 TEST_HEADER include/spdk/vmd.h 00:03:06.191 TEST_HEADER include/spdk/xor.h 00:03:06.191 TEST_HEADER include/spdk/zipf.h 00:03:06.191 CXX test/cpp_headers/accel.o 00:03:06.191 CXX test/cpp_headers/assert.o 00:03:06.191 CXX test/cpp_headers/accel_module.o 00:03:06.191 CXX 
test/cpp_headers/barrier.o 00:03:06.191 CXX test/cpp_headers/base64.o 00:03:06.191 CXX test/cpp_headers/bdev_zone.o 00:03:06.191 CXX test/cpp_headers/bdev_module.o 00:03:06.191 CXX test/cpp_headers/bdev.o 00:03:06.191 CXX test/cpp_headers/bit_array.o 00:03:06.191 CXX test/cpp_headers/blob_bdev.o 00:03:06.191 CXX test/cpp_headers/bit_pool.o 00:03:06.191 CXX test/cpp_headers/blobfs_bdev.o 00:03:06.191 CXX test/cpp_headers/blob.o 00:03:06.191 CXX test/cpp_headers/blobfs.o 00:03:06.191 CXX test/cpp_headers/cpuset.o 00:03:06.191 CXX test/cpp_headers/conf.o 00:03:06.191 CXX test/cpp_headers/config.o 00:03:06.191 CXX test/cpp_headers/crc16.o 00:03:06.191 CXX test/cpp_headers/crc32.o 00:03:06.191 CXX test/cpp_headers/crc64.o 00:03:06.191 CXX test/cpp_headers/endian.o 00:03:06.191 CXX test/cpp_headers/dif.o 00:03:06.191 CXX test/cpp_headers/dma.o 00:03:06.191 CXX test/cpp_headers/event.o 00:03:06.191 CXX test/cpp_headers/env_dpdk.o 00:03:06.191 CXX test/cpp_headers/fd_group.o 00:03:06.191 CXX test/cpp_headers/env.o 00:03:06.191 CXX test/cpp_headers/fd.o 00:03:06.191 CXX test/cpp_headers/file.o 00:03:06.191 CXX test/cpp_headers/fsdev.o 00:03:06.191 CXX test/cpp_headers/fsdev_module.o 00:03:06.191 CXX test/cpp_headers/ftl.o 00:03:06.191 CXX test/cpp_headers/gpt_spec.o 00:03:06.191 CXX test/cpp_headers/fuse_dispatcher.o 00:03:06.191 CXX test/cpp_headers/hexlify.o 00:03:06.191 CXX test/cpp_headers/idxd.o 00:03:06.191 CXX test/cpp_headers/histogram_data.o 00:03:06.191 CXX test/cpp_headers/idxd_spec.o 00:03:06.191 CXX test/cpp_headers/ioat.o 00:03:06.191 CXX test/cpp_headers/init.o 00:03:06.191 CXX test/cpp_headers/iscsi_spec.o 00:03:06.191 CXX test/cpp_headers/ioat_spec.o 00:03:06.191 CXX test/cpp_headers/jsonrpc.o 00:03:06.191 CXX test/cpp_headers/keyring.o 00:03:06.191 CXX test/cpp_headers/keyring_module.o 00:03:06.191 CXX test/cpp_headers/json.o 00:03:06.191 CXX test/cpp_headers/likely.o 00:03:06.191 CXX test/cpp_headers/lvol.o 00:03:06.191 CXX test/cpp_headers/log.o 00:03:06.191 CXX test/cpp_headers/md5.o 00:03:06.191 CXX test/cpp_headers/memory.o 00:03:06.191 CXX test/cpp_headers/mmio.o 00:03:06.191 CXX test/cpp_headers/net.o 00:03:06.191 CXX test/cpp_headers/nbd.o 00:03:06.191 CXX test/cpp_headers/notify.o 00:03:06.191 CXX test/cpp_headers/nvme_intel.o 00:03:06.191 CXX test/cpp_headers/nvme.o 00:03:06.191 CXX test/cpp_headers/nvme_ocssd.o 00:03:06.191 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:06.191 CXX test/cpp_headers/nvme_spec.o 00:03:06.191 CXX test/cpp_headers/nvme_zns.o 00:03:06.191 CXX test/cpp_headers/nvmf_cmd.o 00:03:06.191 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:06.191 CXX test/cpp_headers/nvmf.o 00:03:06.191 CXX test/cpp_headers/nvmf_spec.o 00:03:06.191 CXX test/cpp_headers/nvmf_transport.o 00:03:06.191 CXX test/cpp_headers/opal.o 00:03:06.191 CC test/app/stub/stub.o 00:03:06.191 CC test/env/vtophys/vtophys.o 00:03:06.191 CC examples/ioat/verify/verify.o 00:03:06.191 CC test/app/histogram_perf/histogram_perf.o 00:03:06.191 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:06.191 CC test/thread/poller_perf/poller_perf.o 00:03:06.191 CC examples/util/zipf/zipf.o 00:03:06.191 CC test/env/pci/pci_ut.o 00:03:06.191 CC examples/ioat/perf/perf.o 00:03:06.191 CC test/app/jsoncat/jsoncat.o 00:03:06.463 CC test/env/memory/memory_ut.o 00:03:06.463 CC app/fio/bdev/fio_plugin.o 00:03:06.463 CC app/fio/nvme/fio_plugin.o 00:03:06.463 CC test/app/bdev_svc/bdev_svc.o 00:03:06.463 CC test/dma/test_dma/test_dma.o 00:03:06.463 LINK spdk_lspci 00:03:06.463 LINK interrupt_tgt 00:03:06.463 LINK 
nvmf_tgt 00:03:06.463 LINK spdk_nvme_discover 00:03:06.730 LINK rpc_client_test 00:03:06.730 LINK iscsi_tgt 00:03:06.730 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:06.730 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:06.730 LINK spdk_tgt 00:03:06.730 CC test/env/mem_callbacks/mem_callbacks.o 00:03:06.730 LINK histogram_perf 00:03:06.730 LINK poller_perf 00:03:06.730 CXX test/cpp_headers/opal_spec.o 00:03:06.730 LINK env_dpdk_post_init 00:03:06.730 CXX test/cpp_headers/pci_ids.o 00:03:06.730 CXX test/cpp_headers/pipe.o 00:03:06.730 CXX test/cpp_headers/queue.o 00:03:06.730 CXX test/cpp_headers/rpc.o 00:03:06.730 CXX test/cpp_headers/reduce.o 00:03:06.730 CXX test/cpp_headers/scheduler.o 00:03:06.730 LINK spdk_trace_record 00:03:06.730 CXX test/cpp_headers/scsi.o 00:03:06.730 CXX test/cpp_headers/scsi_spec.o 00:03:06.730 CXX test/cpp_headers/sock.o 00:03:06.730 CXX test/cpp_headers/string.o 00:03:06.730 CXX test/cpp_headers/trace.o 00:03:06.730 CXX test/cpp_headers/thread.o 00:03:06.730 CXX test/cpp_headers/stdinc.o 00:03:06.730 CXX test/cpp_headers/trace_parser.o 00:03:06.730 CXX test/cpp_headers/tree.o 00:03:06.730 CXX test/cpp_headers/ublk.o 00:03:06.730 CXX test/cpp_headers/util.o 00:03:06.730 CXX test/cpp_headers/uuid.o 00:03:06.730 CXX test/cpp_headers/version.o 00:03:06.730 CXX test/cpp_headers/vfio_user_pci.o 00:03:06.730 CXX test/cpp_headers/vfio_user_spec.o 00:03:06.730 CXX test/cpp_headers/vhost.o 00:03:06.730 CXX test/cpp_headers/vmd.o 00:03:06.730 CXX test/cpp_headers/xor.o 00:03:06.730 CXX test/cpp_headers/zipf.o 00:03:06.730 LINK vtophys 00:03:06.990 LINK verify 00:03:06.990 LINK jsoncat 00:03:06.990 LINK zipf 00:03:06.990 LINK ioat_perf 00:03:06.990 LINK spdk_dd 00:03:06.990 LINK stub 00:03:06.990 LINK bdev_svc 00:03:06.990 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:06.990 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:06.990 LINK pci_ut 00:03:06.990 LINK spdk_trace 00:03:07.248 LINK nvme_fuzz 00:03:07.248 LINK test_dma 00:03:07.248 LINK spdk_nvme 00:03:07.248 LINK spdk_bdev 00:03:07.248 CC test/event/reactor/reactor.o 00:03:07.248 LINK spdk_top 00:03:07.248 CC test/event/event_perf/event_perf.o 00:03:07.248 CC examples/idxd/perf/perf.o 00:03:07.248 CC test/event/reactor_perf/reactor_perf.o 00:03:07.248 CC test/event/app_repeat/app_repeat.o 00:03:07.248 CC examples/vmd/lsvmd/lsvmd.o 00:03:07.248 CC examples/vmd/led/led.o 00:03:07.248 CC examples/sock/hello_world/hello_sock.o 00:03:07.248 CC test/event/scheduler/scheduler.o 00:03:07.248 LINK vhost_fuzz 00:03:07.248 CC examples/thread/thread/thread_ex.o 00:03:07.506 LINK mem_callbacks 00:03:07.506 LINK spdk_nvme_perf 00:03:07.506 LINK spdk_nvme_identify 00:03:07.506 LINK reactor 00:03:07.506 LINK event_perf 00:03:07.506 LINK reactor_perf 00:03:07.506 LINK lsvmd 00:03:07.506 CC app/vhost/vhost.o 00:03:07.506 LINK led 00:03:07.506 LINK app_repeat 00:03:07.506 LINK hello_sock 00:03:07.506 LINK scheduler 00:03:07.506 LINK idxd_perf 00:03:07.506 LINK thread 00:03:07.765 LINK vhost 00:03:07.765 CC test/nvme/connect_stress/connect_stress.o 00:03:07.765 CC test/nvme/startup/startup.o 00:03:07.765 CC test/nvme/e2edp/nvme_dp.o 00:03:07.765 CC test/nvme/cuse/cuse.o 00:03:07.765 CC test/nvme/fused_ordering/fused_ordering.o 00:03:07.765 CC test/nvme/reset/reset.o 00:03:07.765 CC test/nvme/aer/aer.o 00:03:07.765 CC test/nvme/fdp/fdp.o 00:03:07.765 CC test/nvme/boot_partition/boot_partition.o 00:03:07.765 CC test/nvme/overhead/overhead.o 00:03:07.765 CC test/nvme/compliance/nvme_compliance.o 00:03:07.765 CC test/nvme/sgl/sgl.o 
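The CXX test/cpp_headers/*.o lines above come from a pass that compiles each public SPDK header in its own C++ translation unit, so a header that does not stand alone (missing includes, C-only constructs) fails the build. Conceptually it reduces to a loop like the one below; the commands and include path are assumptions, not the harness's actual rules:

  # Hypothetical stand-alone check for public headers.
  for h in include/spdk/*.h; do
      printf '#include <spdk/%s>\n' "$(basename "$h")" > tu.cpp
      g++ -I include -fsyntax-only tu.cpp || echo "not self-contained: $h"
  done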
00:03:07.765 CC test/nvme/reserve/reserve.o 00:03:07.765 CC test/nvme/err_injection/err_injection.o 00:03:07.765 LINK memory_ut 00:03:07.765 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:07.765 CC test/nvme/simple_copy/simple_copy.o 00:03:07.765 CC test/blobfs/mkfs/mkfs.o 00:03:07.765 CC test/accel/dif/dif.o 00:03:07.765 CC test/lvol/esnap/esnap.o 00:03:08.023 LINK connect_stress 00:03:08.023 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:08.023 LINK startup 00:03:08.023 CC examples/nvme/arbitration/arbitration.o 00:03:08.023 LINK boot_partition 00:03:08.023 CC examples/nvme/reconnect/reconnect.o 00:03:08.023 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:08.023 CC examples/nvme/hotplug/hotplug.o 00:03:08.023 CC examples/nvme/abort/abort.o 00:03:08.023 CC examples/nvme/hello_world/hello_world.o 00:03:08.023 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:08.023 LINK doorbell_aers 00:03:08.023 LINK err_injection 00:03:08.023 LINK fused_ordering 00:03:08.023 LINK reserve 00:03:08.023 LINK reset 00:03:08.023 LINK simple_copy 00:03:08.023 LINK sgl 00:03:08.023 LINK mkfs 00:03:08.023 LINK nvme_dp 00:03:08.023 LINK overhead 00:03:08.023 LINK fdp 00:03:08.023 LINK aer 00:03:08.023 LINK nvme_compliance 00:03:08.023 CC examples/accel/perf/accel_perf.o 00:03:08.282 LINK cmb_copy 00:03:08.282 CC examples/blob/cli/blobcli.o 00:03:08.282 CC examples/blob/hello_world/hello_blob.o 00:03:08.282 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:08.282 LINK pmr_persistence 00:03:08.282 LINK hotplug 00:03:08.282 LINK hello_world 00:03:08.282 LINK iscsi_fuzz 00:03:08.282 LINK arbitration 00:03:08.282 LINK reconnect 00:03:08.282 LINK abort 00:03:08.282 LINK dif 00:03:08.282 LINK nvme_manage 00:03:08.282 LINK hello_blob 00:03:08.542 LINK hello_fsdev 00:03:08.542 LINK accel_perf 00:03:08.542 LINK blobcli 00:03:08.799 LINK cuse 00:03:08.799 CC test/bdev/bdevio/bdevio.o 00:03:09.057 CC examples/bdev/hello_world/hello_bdev.o 00:03:09.058 CC examples/bdev/bdevperf/bdevperf.o 00:03:09.315 LINK bdevio 00:03:09.315 LINK hello_bdev 00:03:09.575 LINK bdevperf 00:03:10.143 CC examples/nvmf/nvmf/nvmf.o 00:03:10.402 LINK nvmf 00:03:11.340 LINK esnap 00:03:11.600 00:03:11.600 real 0m55.536s 00:03:11.600 user 8m0.648s 00:03:11.600 sys 3m40.863s 00:03:11.600 09:05:12 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:11.600 09:05:12 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.600 ************************************ 00:03:11.600 END TEST make 00:03:11.600 ************************************ 00:03:11.860 09:05:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.860 09:05:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.860 09:05:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.860 09:05:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.860 09:05:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.860 09:05:12 -- pm/common@44 -- $ pid=837512 00:03:11.860 09:05:12 -- pm/common@50 -- $ kill -TERM 837512 00:03:11.860 09:05:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.860 09:05:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.860 09:05:12 -- pm/common@44 -- $ pid=837514 00:03:11.860 09:05:12 -- pm/common@50 -- $ kill -TERM 837514 00:03:11.860 09:05:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.860 09:05:12 -- 
pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:11.860 09:05:12 -- pm/common@44 -- $ pid=837516 00:03:11.860 09:05:12 -- pm/common@50 -- $ kill -TERM 837516 00:03:11.860 09:05:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.860 09:05:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:11.860 09:05:12 -- pm/common@44 -- $ pid=837538 00:03:11.860 09:05:12 -- pm/common@50 -- $ sudo -E kill -TERM 837538 00:03:11.860 09:05:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:11.860 09:05:12 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:11.860 09:05:12 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:11.860 09:05:12 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:11.860 09:05:12 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:11.860 09:05:12 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:11.860 09:05:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.860 09:05:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.860 09:05:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.860 09:05:12 -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.860 09:05:12 -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.860 09:05:12 -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.860 09:05:12 -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.860 09:05:12 -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.860 09:05:12 -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.860 09:05:12 -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.860 09:05:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.860 09:05:12 -- scripts/common.sh@344 -- # case "$op" in 00:03:11.860 09:05:12 -- scripts/common.sh@345 -- # : 1 00:03:11.860 09:05:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.860 09:05:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:11.860 09:05:12 -- scripts/common.sh@365 -- # decimal 1 00:03:11.860 09:05:12 -- scripts/common.sh@353 -- # local d=1 00:03:11.860 09:05:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.860 09:05:12 -- scripts/common.sh@355 -- # echo 1 00:03:11.860 09:05:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.860 09:05:12 -- scripts/common.sh@366 -- # decimal 2 00:03:11.860 09:05:12 -- scripts/common.sh@353 -- # local d=2 00:03:11.860 09:05:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.860 09:05:12 -- scripts/common.sh@355 -- # echo 2 00:03:11.860 09:05:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.860 09:05:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.860 09:05:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.860 09:05:12 -- scripts/common.sh@368 -- # return 0 00:03:11.860 09:05:12 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.860 09:05:12 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:11.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.860 --rc genhtml_branch_coverage=1 00:03:11.860 --rc genhtml_function_coverage=1 00:03:11.860 --rc genhtml_legend=1 00:03:11.860 --rc geninfo_all_blocks=1 00:03:11.860 --rc geninfo_unexecuted_blocks=1 00:03:11.860 00:03:11.860 ' 00:03:11.860 09:05:12 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:11.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.860 --rc genhtml_branch_coverage=1 00:03:11.860 --rc genhtml_function_coverage=1 00:03:11.860 --rc genhtml_legend=1 00:03:11.860 --rc geninfo_all_blocks=1 00:03:11.860 --rc geninfo_unexecuted_blocks=1 00:03:11.860 00:03:11.860 ' 00:03:11.860 09:05:12 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:11.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.860 --rc genhtml_branch_coverage=1 00:03:11.860 --rc genhtml_function_coverage=1 00:03:11.860 --rc genhtml_legend=1 00:03:11.860 --rc geninfo_all_blocks=1 00:03:11.860 --rc geninfo_unexecuted_blocks=1 00:03:11.861 00:03:11.861 ' 00:03:11.861 09:05:12 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:11.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.861 --rc genhtml_branch_coverage=1 00:03:11.861 --rc genhtml_function_coverage=1 00:03:11.861 --rc genhtml_legend=1 00:03:11.861 --rc geninfo_all_blocks=1 00:03:11.861 --rc geninfo_unexecuted_blocks=1 00:03:11.861 00:03:11.861 ' 00:03:11.861 09:05:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:11.861 09:05:12 -- nvmf/common.sh@7 -- # uname -s 00:03:11.861 09:05:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.861 09:05:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.861 09:05:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.861 09:05:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.861 09:05:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.861 09:05:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.861 09:05:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.861 09:05:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.861 09:05:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.861 09:05:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.861 09:05:12 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:11.861 09:05:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:11.861 09:05:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.861 09:05:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.861 09:05:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:11.861 09:05:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.861 09:05:12 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:11.861 09:05:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:12.121 09:05:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:12.121 09:05:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:12.121 09:05:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:12.122 09:05:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.122 09:05:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.122 09:05:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.122 09:05:12 -- paths/export.sh@5 -- # export PATH 00:03:12.122 09:05:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.122 09:05:12 -- nvmf/common.sh@51 -- # : 0 00:03:12.122 09:05:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:12.122 09:05:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:12.122 09:05:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:12.122 09:05:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:12.122 09:05:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:12.122 09:05:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:12.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:12.122 09:05:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:12.122 09:05:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:12.122 09:05:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:12.122 09:05:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:12.122 09:05:12 -- spdk/autotest.sh@32 -- # uname -s 00:03:12.122 09:05:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:12.122 09:05:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:12.122 09:05:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:03:12.122 09:05:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:12.122 09:05:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:12.122 09:05:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:12.122 09:05:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:12.122 09:05:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:12.122 09:05:12 -- spdk/autotest.sh@48 -- # udevadm_pid=899980 00:03:12.122 09:05:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:12.122 09:05:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:12.122 09:05:12 -- pm/common@17 -- # local monitor 00:03:12.122 09:05:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.122 09:05:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.122 09:05:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.122 09:05:12 -- pm/common@21 -- # date +%s 00:03:12.122 09:05:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.122 09:05:12 -- pm/common@21 -- # date +%s 00:03:12.122 09:05:12 -- pm/common@25 -- # sleep 1 00:03:12.122 09:05:12 -- pm/common@21 -- # date +%s 00:03:12.122 09:05:12 -- pm/common@21 -- # date +%s 00:03:12.122 09:05:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732003512 00:03:12.122 09:05:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732003512 00:03:12.122 09:05:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732003512 00:03:12.122 09:05:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732003512 00:03:12.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732003512_collect-cpu-load.pm.log 00:03:12.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732003512_collect-vmstat.pm.log 00:03:12.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732003512_collect-cpu-temp.pm.log 00:03:12.122 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732003512_collect-bmc-pm.bmc.pm.log 00:03:13.060 09:05:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:13.060 09:05:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:13.060 09:05:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:13.060 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:03:13.060 09:05:13 -- spdk/autotest.sh@59 -- # create_test_list 00:03:13.060 09:05:13 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:13.060 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:03:13.060 09:05:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:13.060 09:05:14 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:13.060 09:05:14 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:13.060 09:05:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:13.060 09:05:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:13.060 09:05:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:13.060 09:05:14 -- common/autotest_common.sh@1455 -- # uname 00:03:13.060 09:05:14 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:13.060 09:05:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:13.060 09:05:14 -- common/autotest_common.sh@1475 -- # uname 00:03:13.060 09:05:14 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:13.060 09:05:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:13.060 09:05:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:13.060 lcov: LCOV version 1.15 00:03:13.060 09:05:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:25.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:25.298 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:40.183 09:05:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:40.183 09:05:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.183 09:05:39 -- common/autotest_common.sh@10 -- # set +x 00:03:40.183 09:05:39 -- spdk/autotest.sh@78 -- # rm -f 00:03:40.183 09:05:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.121 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:41.121 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:41.121 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:41.121 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:41.121 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:41.121 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:41.121 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:41.121 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:41.121 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:41.121 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:41.121 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:41.381 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:41.381 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:41.381 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:41.381 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:41.381 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:41.381 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:41.381 09:05:42 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:41.381 09:05:42 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:41.381 09:05:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:41.381 09:05:42 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:41.381 09:05:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:41.381 09:05:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:41.381 09:05:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:41.381 09:05:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.381 09:05:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:41.381 09:05:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:41.381 09:05:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.381 09:05:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:41.381 09:05:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:41.381 09:05:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:41.381 09:05:42 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:41.381 No valid GPT data, bailing 00:03:41.381 09:05:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.381 09:05:42 -- scripts/common.sh@394 -- # pt= 00:03:41.381 09:05:42 -- scripts/common.sh@395 -- # return 1 00:03:41.381 09:05:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:41.640 1+0 records in 00:03:41.640 1+0 records out 00:03:41.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00558399 s, 188 MB/s 00:03:41.640 09:05:42 -- spdk/autotest.sh@105 -- # sync 00:03:41.640 09:05:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:41.640 09:05:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:41.640 09:05:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:46.916 09:05:47 -- spdk/autotest.sh@111 -- # uname -s 00:03:46.916 09:05:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:46.916 09:05:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:46.916 09:05:47 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:50.222 Hugepages 00:03:50.222 node hugesize free / total 00:03:50.222 node0 1048576kB 0 / 0 00:03:50.222 node0 2048kB 1024 / 1024 00:03:50.222 node1 1048576kB 0 / 0 00:03:50.222 node1 2048kB 1024 / 1024 00:03:50.222 00:03:50.222 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.222 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:50.222 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:50.222 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:50.222 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:50.222 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:50.222 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:50.222 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:50.222 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:50.222 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:50.222 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:50.222 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:50.222 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:50.222 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:50.222 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:50.222 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:50.222 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:50.222 I/OAT 
0000:80:04.7 8086 2021 1 ioatdma - - 00:03:50.222 09:05:50 -- spdk/autotest.sh@117 -- # uname -s 00:03:50.222 09:05:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:50.222 09:05:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:50.222 09:05:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.761 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:52.761 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:53.020 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:53.020 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:53.020 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:53.020 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:53.020 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:53.588 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.847 09:05:54 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:54.785 09:05:55 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:54.785 09:05:55 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:54.785 09:05:55 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:54.785 09:05:55 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:54.785 09:05:55 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:54.785 09:05:55 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:54.785 09:05:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.785 09:05:55 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:54.785 09:05:55 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:55.044 09:05:55 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:55.044 09:05:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:55.044 09:05:55 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.579 Waiting for block devices as requested 00:03:57.579 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:57.838 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:57.838 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:57.838 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:58.097 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:58.097 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:58.097 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:58.356 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:58.356 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:58.356 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:58.614 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:58.614 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:58.614 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:58.614 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:58.873 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:58.873 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:58.873 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:59.132 09:05:59 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:59.132 09:05:59 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:59.132 09:05:59 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:59.132 09:05:59 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:03:59.132 09:05:59 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:59.132 09:05:59 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:59.132 09:05:59 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:59.132 09:06:00 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:59.132 09:06:00 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:59.132 09:06:00 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:59.132 09:06:00 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:59.132 09:06:00 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:59.132 09:06:00 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:59.132 09:06:00 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:59.132 09:06:00 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:59.132 09:06:00 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:59.132 09:06:00 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:59.132 09:06:00 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:59.132 09:06:00 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:59.132 09:06:00 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:59.132 09:06:00 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:59.132 09:06:00 -- common/autotest_common.sh@1541 -- # continue 00:03:59.132 09:06:00 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:59.132 09:06:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.132 09:06:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.132 09:06:00 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:59.132 09:06:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.132 09:06:00 -- common/autotest_common.sh@10 -- # set +x 00:03:59.132 09:06:00 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.425 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:02.425 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.995 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.995 09:06:04 -- 
spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:02.995 09:06:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.995 09:06:04 -- common/autotest_common.sh@10 -- # set +x 00:04:02.995 09:06:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:02.995 09:06:04 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:02.995 09:06:04 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:02.995 09:06:04 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:02.995 09:06:04 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:02.995 09:06:04 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:02.995 09:06:04 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:02.995 09:06:04 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:02.995 09:06:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:02.995 09:06:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:02.995 09:06:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.995 09:06:04 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:02.995 09:06:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:03.254 09:06:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:03.254 09:06:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:03.254 09:06:04 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:03.254 09:06:04 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:03.254 09:06:04 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:03.254 09:06:04 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:03.254 09:06:04 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:03.254 09:06:04 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:03.254 09:06:04 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:03.254 09:06:04 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:03.254 09:06:04 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=914544 00:04:03.254 09:06:04 -- common/autotest_common.sh@1583 -- # waitforlisten 914544 00:04:03.254 09:06:04 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.254 09:06:04 -- common/autotest_common.sh@833 -- # '[' -z 914544 ']' 00:04:03.254 09:06:04 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.254 09:06:04 -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:03.254 09:06:04 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.254 09:06:04 -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:03.254 09:06:04 -- common/autotest_common.sh@10 -- # set +x 00:04:03.254 [2024-11-19 09:06:04.190682] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:04:03.254 [2024-11-19 09:06:04.190739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid914544 ] 00:04:03.254 [2024-11-19 09:06:04.266505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.254 [2024-11-19 09:06:04.309014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.513 09:06:04 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:03.513 09:06:04 -- common/autotest_common.sh@866 -- # return 0 00:04:03.513 09:06:04 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:03.514 09:06:04 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:03.514 09:06:04 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:06.805 nvme0n1 00:04:06.805 09:06:07 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:06.805 [2024-11-19 09:06:07.729922] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:06.805 request: 00:04:06.805 { 00:04:06.805 "nvme_ctrlr_name": "nvme0", 00:04:06.805 "password": "test", 00:04:06.805 "method": "bdev_nvme_opal_revert", 00:04:06.805 "req_id": 1 00:04:06.805 } 00:04:06.805 Got JSON-RPC error response 00:04:06.805 response: 00:04:06.805 { 00:04:06.805 "code": -32602, 00:04:06.805 "message": "Invalid parameters" 00:04:06.805 } 00:04:06.805 09:06:07 -- common/autotest_common.sh@1589 -- # true 00:04:06.805 09:06:07 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:06.805 09:06:07 -- common/autotest_common.sh@1593 -- # killprocess 914544 00:04:06.805 09:06:07 -- common/autotest_common.sh@952 -- # '[' -z 914544 ']' 00:04:06.805 09:06:07 -- common/autotest_common.sh@956 -- # kill -0 914544 00:04:06.805 09:06:07 -- common/autotest_common.sh@957 -- # uname 00:04:06.805 09:06:07 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:06.805 09:06:07 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 914544 00:04:06.805 09:06:07 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:06.805 09:06:07 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:06.805 09:06:07 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 914544' 00:04:06.805 killing process with pid 914544 00:04:06.805 09:06:07 -- common/autotest_common.sh@971 -- # kill 914544 00:04:06.805 09:06:07 -- common/autotest_common.sh@976 -- # wait 914544 00:04:08.711 09:06:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:08.711 09:06:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:08.711 09:06:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.711 09:06:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.711 09:06:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:08.711 09:06:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.711 09:06:09 -- common/autotest_common.sh@10 -- # set +x 00:04:08.711 09:06:09 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:08.711 09:06:09 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:08.711 09:06:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.711 09:06:09 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:04:08.711 09:06:09 -- common/autotest_common.sh@10 -- # set +x 00:04:08.711 ************************************ 00:04:08.711 START TEST env 00:04:08.711 ************************************ 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:08.711 * Looking for test storage... 00:04:08.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.711 09:06:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.711 09:06:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.711 09:06:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.711 09:06:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.711 09:06:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.711 09:06:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.711 09:06:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.711 09:06:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.711 09:06:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.711 09:06:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.711 09:06:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.711 09:06:09 env -- scripts/common.sh@344 -- # case "$op" in 00:04:08.711 09:06:09 env -- scripts/common.sh@345 -- # : 1 00:04:08.711 09:06:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.711 09:06:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.711 09:06:09 env -- scripts/common.sh@365 -- # decimal 1 00:04:08.711 09:06:09 env -- scripts/common.sh@353 -- # local d=1 00:04:08.711 09:06:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.711 09:06:09 env -- scripts/common.sh@355 -- # echo 1 00:04:08.711 09:06:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.711 09:06:09 env -- scripts/common.sh@366 -- # decimal 2 00:04:08.711 09:06:09 env -- scripts/common.sh@353 -- # local d=2 00:04:08.711 09:06:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.711 09:06:09 env -- scripts/common.sh@355 -- # echo 2 00:04:08.711 09:06:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.711 09:06:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.711 09:06:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.711 09:06:09 env -- scripts/common.sh@368 -- # return 0 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.711 --rc genhtml_branch_coverage=1 00:04:08.711 --rc genhtml_function_coverage=1 00:04:08.711 --rc genhtml_legend=1 00:04:08.711 --rc geninfo_all_blocks=1 00:04:08.711 --rc geninfo_unexecuted_blocks=1 00:04:08.711 00:04:08.711 ' 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.711 --rc genhtml_branch_coverage=1 00:04:08.711 --rc genhtml_function_coverage=1 00:04:08.711 --rc genhtml_legend=1 00:04:08.711 --rc geninfo_all_blocks=1 00:04:08.711 --rc geninfo_unexecuted_blocks=1 00:04:08.711 00:04:08.711 ' 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.711 --rc genhtml_branch_coverage=1 00:04:08.711 --rc genhtml_function_coverage=1 00:04:08.711 --rc genhtml_legend=1 00:04:08.711 --rc geninfo_all_blocks=1 00:04:08.711 --rc geninfo_unexecuted_blocks=1 00:04:08.711 00:04:08.711 ' 00:04:08.711 09:06:09 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.712 --rc genhtml_branch_coverage=1 00:04:08.712 --rc genhtml_function_coverage=1 00:04:08.712 --rc genhtml_legend=1 00:04:08.712 --rc geninfo_all_blocks=1 00:04:08.712 --rc geninfo_unexecuted_blocks=1 00:04:08.712 00:04:08.712 ' 00:04:08.712 09:06:09 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:08.712 09:06:09 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.712 09:06:09 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:08.712 09:06:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.712 ************************************ 00:04:08.712 START TEST env_memory 00:04:08.712 ************************************ 00:04:08.712 09:06:09 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:08.712 00:04:08.712 00:04:08.712 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.712 http://cunit.sourceforge.net/ 00:04:08.712 00:04:08.712 00:04:08.712 Suite: memory 00:04:08.712 Test: alloc and free memory map ...[2024-11-19 09:06:09.668574] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:08.712 passed 00:04:08.712 Test: mem map translation ...[2024-11-19 09:06:09.689054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:08.712 [2024-11-19 09:06:09.689070] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:08.712 [2024-11-19 09:06:09.689104] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:08.712 [2024-11-19 09:06:09.689111] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:08.712 passed 00:04:08.712 Test: mem map registration ...[2024-11-19 09:06:09.728322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:08.712 [2024-11-19 09:06:09.728340] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:08.712 passed 00:04:08.972 Test: mem map adjacent registrations ...passed 00:04:08.972 00:04:08.972 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.972 suites 1 1 n/a 0 0 00:04:08.972 tests 4 4 4 0 0 00:04:08.972 asserts 152 152 152 0 n/a 00:04:08.972 00:04:08.972 Elapsed time = 0.140 seconds 00:04:08.972 00:04:08.972 real 0m0.149s 00:04:08.972 user 0m0.144s 00:04:08.972 sys 0m0.004s 00:04:08.972 09:06:09 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:08.972 09:06:09 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.972 ************************************ 00:04:08.972 END TEST env_memory 00:04:08.972 ************************************ 00:04:08.972 09:06:09 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:08.972 09:06:09 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.972 09:06:09 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:08.972 09:06:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.972 ************************************ 00:04:08.972 START TEST env_vtophys 00:04:08.972 ************************************ 00:04:08.972 09:06:09 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:08.972 EAL: lib.eal log level changed from notice to debug 00:04:08.972 EAL: Detected lcore 0 as core 0 on socket 0 00:04:08.972 EAL: Detected lcore 1 as core 1 on socket 0 00:04:08.972 EAL: Detected lcore 2 as core 2 on socket 0 00:04:08.972 EAL: Detected lcore 3 as core 3 on socket 0 00:04:08.972 EAL: Detected lcore 4 as core 4 on socket 0 00:04:08.972 EAL: Detected lcore 5 as core 5 on socket 0 00:04:08.972 EAL: Detected lcore 6 as core 6 on socket 0 00:04:08.972 EAL: Detected lcore 7 as core 8 on socket 0 00:04:08.972 EAL: Detected lcore 8 as core 9 on socket 0 00:04:08.972 EAL: Detected lcore 9 as core 10 on socket 0 00:04:08.972 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:08.972 EAL: Detected lcore 11 as core 12 on socket 0 00:04:08.972 EAL: Detected lcore 12 as core 13 on socket 0 00:04:08.972 EAL: Detected lcore 13 as core 16 on socket 0 00:04:08.972 EAL: Detected lcore 14 as core 17 on socket 0 00:04:08.972 EAL: Detected lcore 15 as core 18 on socket 0 00:04:08.972 EAL: Detected lcore 16 as core 19 on socket 0 00:04:08.972 EAL: Detected lcore 17 as core 20 on socket 0 00:04:08.972 EAL: Detected lcore 18 as core 21 on socket 0 00:04:08.972 EAL: Detected lcore 19 as core 25 on socket 0 00:04:08.972 EAL: Detected lcore 20 as core 26 on socket 0 00:04:08.972 EAL: Detected lcore 21 as core 27 on socket 0 00:04:08.972 EAL: Detected lcore 22 as core 28 on socket 0 00:04:08.972 EAL: Detected lcore 23 as core 29 on socket 0 00:04:08.972 EAL: Detected lcore 24 as core 0 on socket 1 00:04:08.972 EAL: Detected lcore 25 as core 1 on socket 1 00:04:08.972 EAL: Detected lcore 26 as core 2 on socket 1 00:04:08.972 EAL: Detected lcore 27 as core 3 on socket 1 00:04:08.972 EAL: Detected lcore 28 as core 4 on socket 1 00:04:08.972 EAL: Detected lcore 29 as core 5 on socket 1 00:04:08.972 EAL: Detected lcore 30 as core 6 on socket 1 00:04:08.972 EAL: Detected lcore 31 as core 9 on socket 1 00:04:08.972 EAL: Detected lcore 32 as core 10 on socket 1 00:04:08.972 EAL: Detected lcore 33 as core 11 on socket 1 00:04:08.972 EAL: Detected lcore 34 as core 12 on socket 1 00:04:08.972 EAL: Detected lcore 35 as core 13 on socket 1 00:04:08.972 EAL: Detected lcore 36 as core 16 on socket 1 00:04:08.972 EAL: Detected lcore 37 as core 17 on socket 1 00:04:08.972 EAL: Detected lcore 38 as core 18 on socket 1 00:04:08.972 EAL: Detected lcore 39 as core 19 on socket 1 00:04:08.972 EAL: Detected lcore 40 as core 20 on socket 1 00:04:08.972 EAL: Detected lcore 41 as core 21 on socket 1 00:04:08.972 EAL: Detected lcore 42 as core 24 on socket 1 00:04:08.972 EAL: Detected lcore 43 as core 25 on socket 1 00:04:08.972 EAL: Detected lcore 44 as core 26 on socket 1 00:04:08.972 EAL: Detected lcore 45 as core 27 on socket 1 00:04:08.972 EAL: Detected lcore 46 as core 28 on socket 1 00:04:08.972 EAL: Detected lcore 47 as core 29 on socket 1 00:04:08.972 EAL: Detected lcore 48 as core 0 on socket 0 00:04:08.972 EAL: Detected lcore 49 as core 1 on socket 0 00:04:08.972 EAL: Detected lcore 50 as core 2 on socket 0 00:04:08.973 EAL: Detected lcore 51 as core 3 on socket 0 00:04:08.973 EAL: Detected lcore 52 as core 4 on socket 0 00:04:08.973 EAL: Detected lcore 53 as core 5 on socket 0 00:04:08.973 EAL: Detected lcore 54 as core 6 on socket 0 00:04:08.973 EAL: Detected lcore 55 as core 8 on socket 0 00:04:08.973 EAL: Detected lcore 56 as core 9 on socket 0 00:04:08.973 EAL: Detected lcore 57 as core 10 on socket 0 00:04:08.973 EAL: Detected lcore 58 as core 11 on socket 0 00:04:08.973 EAL: Detected lcore 59 as core 12 on socket 0 00:04:08.973 EAL: Detected lcore 60 as core 13 on socket 0 00:04:08.973 EAL: Detected lcore 61 as core 16 on socket 0 00:04:08.973 EAL: Detected lcore 62 as core 17 on socket 0 00:04:08.973 EAL: Detected lcore 63 as core 18 on socket 0 00:04:08.973 EAL: Detected lcore 64 as core 19 on socket 0 00:04:08.973 EAL: Detected lcore 65 as core 20 on socket 0 00:04:08.973 EAL: Detected lcore 66 as core 21 on socket 0 00:04:08.973 EAL: Detected lcore 67 as core 25 on socket 0 00:04:08.973 EAL: Detected lcore 68 as core 26 on socket 0 00:04:08.973 EAL: Detected lcore 69 as core 27 on socket 0 00:04:08.973 EAL: Detected lcore 70 as core 28 on socket 0 
00:04:08.973 EAL: Detected lcore 71 as core 29 on socket 0 00:04:08.973 EAL: Detected lcore 72 as core 0 on socket 1 00:04:08.973 EAL: Detected lcore 73 as core 1 on socket 1 00:04:08.973 EAL: Detected lcore 74 as core 2 on socket 1 00:04:08.973 EAL: Detected lcore 75 as core 3 on socket 1 00:04:08.973 EAL: Detected lcore 76 as core 4 on socket 1 00:04:08.973 EAL: Detected lcore 77 as core 5 on socket 1 00:04:08.973 EAL: Detected lcore 78 as core 6 on socket 1 00:04:08.973 EAL: Detected lcore 79 as core 9 on socket 1 00:04:08.973 EAL: Detected lcore 80 as core 10 on socket 1 00:04:08.973 EAL: Detected lcore 81 as core 11 on socket 1 00:04:08.973 EAL: Detected lcore 82 as core 12 on socket 1 00:04:08.973 EAL: Detected lcore 83 as core 13 on socket 1 00:04:08.973 EAL: Detected lcore 84 as core 16 on socket 1 00:04:08.973 EAL: Detected lcore 85 as core 17 on socket 1 00:04:08.973 EAL: Detected lcore 86 as core 18 on socket 1 00:04:08.973 EAL: Detected lcore 87 as core 19 on socket 1 00:04:08.973 EAL: Detected lcore 88 as core 20 on socket 1 00:04:08.973 EAL: Detected lcore 89 as core 21 on socket 1 00:04:08.973 EAL: Detected lcore 90 as core 24 on socket 1 00:04:08.973 EAL: Detected lcore 91 as core 25 on socket 1 00:04:08.973 EAL: Detected lcore 92 as core 26 on socket 1 00:04:08.973 EAL: Detected lcore 93 as core 27 on socket 1 00:04:08.973 EAL: Detected lcore 94 as core 28 on socket 1 00:04:08.973 EAL: Detected lcore 95 as core 29 on socket 1 00:04:08.973 EAL: Maximum logical cores by configuration: 128 00:04:08.973 EAL: Detected CPU lcores: 96 00:04:08.973 EAL: Detected NUMA nodes: 2 00:04:08.973 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:08.973 EAL: Detected shared linkage of DPDK 00:04:08.973 EAL: No shared files mode enabled, IPC will be disabled 00:04:08.973 EAL: Bus pci wants IOVA as 'DC' 00:04:08.973 EAL: Buses did not request a specific IOVA mode. 00:04:08.973 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:08.973 EAL: Selected IOVA mode 'VA' 00:04:08.973 EAL: Probing VFIO support... 00:04:08.973 EAL: IOMMU type 1 (Type 1) is supported 00:04:08.973 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:08.973 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:08.973 EAL: VFIO support initialized 00:04:08.973 EAL: Ask a virtual area of 0x2e000 bytes 00:04:08.973 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:08.973 EAL: Setting up physically contiguous memory... 
00:04:08.973 EAL: Setting maximum number of open files to 524288 00:04:08.973 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:08.973 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:08.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:08.973 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.973 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:08.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.973 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.973 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:08.973 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:08.973 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.973 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:08.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.973 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.973 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:08.973 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:08.973 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.973 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:08.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.973 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.973 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:08.973 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:08.973 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.973 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:08.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.973 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.973 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:08.973 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:08.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:08.973 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.973 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:08.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.973 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.973 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:08.973 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:08.973 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.973 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:08.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.973 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.973 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:08.973 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:08.973 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.973 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:08.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.973 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.973 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:08.973 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:08.973 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.973 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:08.973 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.973 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.973 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:08.973 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:08.973 EAL: Hugepages will be freed exactly as allocated. 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: TSC frequency is ~2300000 KHz 00:04:08.973 EAL: Main lcore 0 is ready (tid=7f22f9634a00;cpuset=[0]) 00:04:08.973 EAL: Trying to obtain current memory policy. 00:04:08.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.973 EAL: Restoring previous memory policy: 0 00:04:08.973 EAL: request: mp_malloc_sync 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: Heap on socket 0 was expanded by 2MB 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:08.973 EAL: Mem event callback 'spdk:(nil)' registered 00:04:08.973 00:04:08.973 00:04:08.973 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.973 http://cunit.sourceforge.net/ 00:04:08.973 00:04:08.973 00:04:08.973 Suite: components_suite 00:04:08.973 Test: vtophys_malloc_test ...passed 00:04:08.973 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:08.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.973 EAL: Restoring previous memory policy: 4 00:04:08.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.973 EAL: request: mp_malloc_sync 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: Heap on socket 0 was expanded by 4MB 00:04:08.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.973 EAL: request: mp_malloc_sync 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: Heap on socket 0 was shrunk by 4MB 00:04:08.973 EAL: Trying to obtain current memory policy. 00:04:08.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.973 EAL: Restoring previous memory policy: 4 00:04:08.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.973 EAL: request: mp_malloc_sync 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: Heap on socket 0 was expanded by 6MB 00:04:08.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.973 EAL: request: mp_malloc_sync 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: Heap on socket 0 was shrunk by 6MB 00:04:08.973 EAL: Trying to obtain current memory policy. 00:04:08.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.973 EAL: Restoring previous memory policy: 4 00:04:08.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.973 EAL: request: mp_malloc_sync 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: Heap on socket 0 was expanded by 10MB 00:04:08.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.973 EAL: request: mp_malloc_sync 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: Heap on socket 0 was shrunk by 10MB 00:04:08.973 EAL: Trying to obtain current memory policy. 
00:04:08.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.973 EAL: Restoring previous memory policy: 4 00:04:08.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.973 EAL: request: mp_malloc_sync 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: Heap on socket 0 was expanded by 18MB 00:04:08.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.973 EAL: request: mp_malloc_sync 00:04:08.973 EAL: No shared files mode enabled, IPC is disabled 00:04:08.973 EAL: Heap on socket 0 was shrunk by 18MB 00:04:08.973 EAL: Trying to obtain current memory policy. 00:04:08.973 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.973 EAL: Restoring previous memory policy: 4 00:04:08.974 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.974 EAL: request: mp_malloc_sync 00:04:08.974 EAL: No shared files mode enabled, IPC is disabled 00:04:08.974 EAL: Heap on socket 0 was expanded by 34MB 00:04:08.974 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.974 EAL: request: mp_malloc_sync 00:04:08.974 EAL: No shared files mode enabled, IPC is disabled 00:04:08.974 EAL: Heap on socket 0 was shrunk by 34MB 00:04:08.974 EAL: Trying to obtain current memory policy. 00:04:08.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.974 EAL: Restoring previous memory policy: 4 00:04:08.974 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.974 EAL: request: mp_malloc_sync 00:04:08.974 EAL: No shared files mode enabled, IPC is disabled 00:04:08.974 EAL: Heap on socket 0 was expanded by 66MB 00:04:08.974 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.974 EAL: request: mp_malloc_sync 00:04:08.974 EAL: No shared files mode enabled, IPC is disabled 00:04:08.974 EAL: Heap on socket 0 was shrunk by 66MB 00:04:08.974 EAL: Trying to obtain current memory policy. 00:04:08.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.974 EAL: Restoring previous memory policy: 4 00:04:08.974 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.974 EAL: request: mp_malloc_sync 00:04:08.974 EAL: No shared files mode enabled, IPC is disabled 00:04:08.974 EAL: Heap on socket 0 was expanded by 130MB 00:04:09.233 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.233 EAL: request: mp_malloc_sync 00:04:09.233 EAL: No shared files mode enabled, IPC is disabled 00:04:09.233 EAL: Heap on socket 0 was shrunk by 130MB 00:04:09.233 EAL: Trying to obtain current memory policy. 00:04:09.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.233 EAL: Restoring previous memory policy: 4 00:04:09.233 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.233 EAL: request: mp_malloc_sync 00:04:09.233 EAL: No shared files mode enabled, IPC is disabled 00:04:09.233 EAL: Heap on socket 0 was expanded by 258MB 00:04:09.233 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.233 EAL: request: mp_malloc_sync 00:04:09.233 EAL: No shared files mode enabled, IPC is disabled 00:04:09.233 EAL: Heap on socket 0 was shrunk by 258MB 00:04:09.233 EAL: Trying to obtain current memory policy. 
00:04:09.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.233 EAL: Restoring previous memory policy: 4 00:04:09.233 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.233 EAL: request: mp_malloc_sync 00:04:09.233 EAL: No shared files mode enabled, IPC is disabled 00:04:09.233 EAL: Heap on socket 0 was expanded by 514MB 00:04:09.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.493 EAL: request: mp_malloc_sync 00:04:09.493 EAL: No shared files mode enabled, IPC is disabled 00:04:09.493 EAL: Heap on socket 0 was shrunk by 514MB 00:04:09.493 EAL: Trying to obtain current memory policy. 00:04:09.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.754 EAL: Restoring previous memory policy: 4 00:04:09.754 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.754 EAL: request: mp_malloc_sync 00:04:09.754 EAL: No shared files mode enabled, IPC is disabled 00:04:09.754 EAL: Heap on socket 0 was expanded by 1026MB 00:04:09.754 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.015 EAL: request: mp_malloc_sync 00:04:10.015 EAL: No shared files mode enabled, IPC is disabled 00:04:10.015 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:10.015 passed 00:04:10.015 00:04:10.015 Run Summary: Type Total Ran Passed Failed Inactive 00:04:10.015 suites 1 1 n/a 0 0 00:04:10.015 tests 2 2 2 0 0 00:04:10.015 asserts 497 497 497 0 n/a 00:04:10.015 00:04:10.015 Elapsed time = 0.986 seconds 00:04:10.015 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.015 EAL: request: mp_malloc_sync 00:04:10.015 EAL: No shared files mode enabled, IPC is disabled 00:04:10.015 EAL: Heap on socket 0 was shrunk by 2MB 00:04:10.015 EAL: No shared files mode enabled, IPC is disabled 00:04:10.015 EAL: No shared files mode enabled, IPC is disabled 00:04:10.015 EAL: No shared files mode enabled, IPC is disabled 00:04:10.015 00:04:10.015 real 0m1.116s 00:04:10.015 user 0m0.657s 00:04:10.015 sys 0m0.434s 00:04:10.015 09:06:10 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.015 09:06:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:10.015 ************************************ 00:04:10.015 END TEST env_vtophys 00:04:10.015 ************************************ 00:04:10.015 09:06:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:10.015 09:06:10 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.015 09:06:10 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.015 09:06:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.015 ************************************ 00:04:10.015 START TEST env_pci 00:04:10.015 ************************************ 00:04:10.015 09:06:11 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:10.015 00:04:10.015 00:04:10.015 CUnit - A unit testing framework for C - Version 2.1-3 00:04:10.015 http://cunit.sourceforge.net/ 00:04:10.015 00:04:10.015 00:04:10.015 Suite: pci 00:04:10.015 Test: pci_hook ...[2024-11-19 09:06:11.047451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 916199 has claimed it 00:04:10.275 EAL: Cannot find device (10000:00:01.0) 00:04:10.275 EAL: Failed to attach device on primary process 00:04:10.275 passed 00:04:10.275 00:04:10.275 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:10.275 suites 1 1 n/a 0 0 00:04:10.275 tests 1 1 1 0 0 00:04:10.275 asserts 25 25 25 0 n/a 00:04:10.275 00:04:10.275 Elapsed time = 0.026 seconds 00:04:10.275 00:04:10.275 real 0m0.046s 00:04:10.275 user 0m0.015s 00:04:10.275 sys 0m0.031s 00:04:10.275 09:06:11 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.275 09:06:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:10.275 ************************************ 00:04:10.275 END TEST env_pci 00:04:10.275 ************************************ 00:04:10.275 09:06:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:10.275 09:06:11 env -- env/env.sh@15 -- # uname 00:04:10.275 09:06:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:10.275 09:06:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:10.275 09:06:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:10.275 09:06:11 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:10.275 09:06:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.275 09:06:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.275 ************************************ 00:04:10.275 START TEST env_dpdk_post_init 00:04:10.275 ************************************ 00:04:10.275 09:06:11 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:10.275 EAL: Detected CPU lcores: 96 00:04:10.275 EAL: Detected NUMA nodes: 2 00:04:10.275 EAL: Detected shared linkage of DPDK 00:04:10.275 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:10.275 EAL: Selected IOVA mode 'VA' 00:04:10.275 EAL: VFIO support initialized 00:04:10.275 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.275 EAL: Using IOMMU type 1 (Type 1) 00:04:10.275 EAL: Ignore mapping IO port bar(1) 00:04:10.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:10.275 EAL: Ignore mapping IO port bar(1) 00:04:10.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:10.275 EAL: Ignore mapping IO port bar(1) 00:04:10.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:10.275 EAL: Ignore mapping IO port bar(1) 00:04:10.275 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:10.534 EAL: Ignore mapping IO port bar(1) 00:04:10.534 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:10.534 EAL: Ignore mapping IO port bar(1) 00:04:10.534 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:10.534 EAL: Ignore mapping IO port bar(1) 00:04:10.534 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:10.534 EAL: Ignore mapping IO port bar(1) 00:04:10.534 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:11.103 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:11.103 EAL: Ignore mapping IO port bar(1) 00:04:11.103 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:11.103 EAL: Ignore mapping IO port bar(1) 00:04:11.103 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:11.103 EAL: Ignore mapping IO port bar(1) 00:04:11.103 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:11.103 EAL: Ignore mapping IO port bar(1) 00:04:11.103 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:11.362 EAL: Ignore mapping IO port bar(1) 00:04:11.362 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:11.362 EAL: Ignore mapping IO port bar(1) 00:04:11.362 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:11.362 EAL: Ignore mapping IO port bar(1) 00:04:11.362 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:11.362 EAL: Ignore mapping IO port bar(1) 00:04:11.362 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:14.650 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:14.650 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:14.650 Starting DPDK initialization... 00:04:14.650 Starting SPDK post initialization... 00:04:14.650 SPDK NVMe probe 00:04:14.650 Attaching to 0000:5e:00.0 00:04:14.650 Attached to 0000:5e:00.0 00:04:14.650 Cleaning up... 00:04:14.650 00:04:14.650 real 0m4.341s 00:04:14.650 user 0m2.950s 00:04:14.650 sys 0m0.466s 00:04:14.650 09:06:15 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.650 09:06:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.650 ************************************ 00:04:14.650 END TEST env_dpdk_post_init 00:04:14.650 ************************************ 00:04:14.650 09:06:15 env -- env/env.sh@26 -- # uname 00:04:14.650 09:06:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.650 09:06:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.650 09:06:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.650 09:06:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.650 09:06:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.650 ************************************ 00:04:14.650 START TEST env_mem_callbacks 00:04:14.650 ************************************ 00:04:14.650 09:06:15 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.650 EAL: Detected CPU lcores: 96 00:04:14.650 EAL: Detected NUMA nodes: 2 00:04:14.650 EAL: Detected shared linkage of DPDK 00:04:14.650 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.650 EAL: Selected IOVA mode 'VA' 00:04:14.650 EAL: VFIO support initialized 00:04:14.650 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.650 00:04:14.650 00:04:14.650 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.650 http://cunit.sourceforge.net/ 00:04:14.650 00:04:14.650 00:04:14.651 Suite: memory 00:04:14.651 Test: test ... 
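The register/unregister lines that follow come from a hook the test installs on SPDK's memory map: small mallocs are satisfied from hugepages that are already mapped, while larger ones force EAL to map a fresh region (a "register" line, rounded up to hugepage granularity) that is torn down again when the buffer is freed (the matching "unregister"). The suite is an ordinary CUnit binary and can be run standalone outside the harness:

    # Run the memory-callback suite by hand (hugepages must be configured):
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks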
00:04:14.651 register 0x200000200000 2097152 00:04:14.651 malloc 3145728 00:04:14.651 register 0x200000400000 4194304 00:04:14.651 buf 0x200000500000 len 3145728 PASSED 00:04:14.651 malloc 64 00:04:14.651 buf 0x2000004fff40 len 64 PASSED 00:04:14.651 malloc 4194304 00:04:14.651 register 0x200000800000 6291456 00:04:14.651 buf 0x200000a00000 len 4194304 PASSED 00:04:14.651 free 0x200000500000 3145728 00:04:14.651 free 0x2000004fff40 64 00:04:14.651 unregister 0x200000400000 4194304 PASSED 00:04:14.651 free 0x200000a00000 4194304 00:04:14.651 unregister 0x200000800000 6291456 PASSED 00:04:14.651 malloc 8388608 00:04:14.651 register 0x200000400000 10485760 00:04:14.651 buf 0x200000600000 len 8388608 PASSED 00:04:14.651 free 0x200000600000 8388608 00:04:14.651 unregister 0x200000400000 10485760 PASSED 00:04:14.651 passed 00:04:14.651 00:04:14.651 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.651 suites 1 1 n/a 0 0 00:04:14.651 tests 1 1 1 0 0 00:04:14.651 asserts 15 15 15 0 n/a 00:04:14.651 00:04:14.651 Elapsed time = 0.008 seconds 00:04:14.651 00:04:14.651 real 0m0.059s 00:04:14.651 user 0m0.022s 00:04:14.651 sys 0m0.037s 00:04:14.651 09:06:15 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.651 09:06:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:14.651 ************************************ 00:04:14.651 END TEST env_mem_callbacks 00:04:14.651 ************************************ 00:04:14.651 00:04:14.651 real 0m6.237s 00:04:14.651 user 0m4.027s 00:04:14.651 sys 0m1.294s 00:04:14.651 09:06:15 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:14.651 09:06:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.651 ************************************ 00:04:14.651 END TEST env 00:04:14.651 ************************************ 00:04:14.651 09:06:15 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:14.651 09:06:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:14.651 09:06:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:14.651 09:06:15 -- common/autotest_common.sh@10 -- # set +x 00:04:14.910 ************************************ 00:04:14.910 START TEST rpc 00:04:14.910 ************************************ 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:14.910 * Looking for test storage... 
00:04:14.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:14.910 09:06:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.910 09:06:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.910 09:06:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.910 09:06:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.910 09:06:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.910 09:06:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.910 09:06:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.910 09:06:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.910 09:06:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.910 09:06:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.910 09:06:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.910 09:06:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:14.910 09:06:15 rpc -- scripts/common.sh@345 -- # : 1 00:04:14.910 09:06:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.910 09:06:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.910 09:06:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:14.910 09:06:15 rpc -- scripts/common.sh@353 -- # local d=1 00:04:14.910 09:06:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.910 09:06:15 rpc -- scripts/common.sh@355 -- # echo 1 00:04:14.910 09:06:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.910 09:06:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:14.910 09:06:15 rpc -- scripts/common.sh@353 -- # local d=2 00:04:14.910 09:06:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.910 09:06:15 rpc -- scripts/common.sh@355 -- # echo 2 00:04:14.910 09:06:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.910 09:06:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.910 09:06:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.910 09:06:15 rpc -- scripts/common.sh@368 -- # return 0 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:14.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.910 --rc genhtml_branch_coverage=1 00:04:14.910 --rc genhtml_function_coverage=1 00:04:14.910 --rc genhtml_legend=1 00:04:14.910 --rc geninfo_all_blocks=1 00:04:14.910 --rc geninfo_unexecuted_blocks=1 00:04:14.910 00:04:14.910 ' 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:14.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.910 --rc genhtml_branch_coverage=1 00:04:14.910 --rc genhtml_function_coverage=1 00:04:14.910 --rc genhtml_legend=1 00:04:14.910 --rc geninfo_all_blocks=1 00:04:14.910 --rc geninfo_unexecuted_blocks=1 00:04:14.910 00:04:14.910 ' 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:14.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.910 --rc genhtml_branch_coverage=1 00:04:14.910 --rc genhtml_function_coverage=1 
00:04:14.910 --rc genhtml_legend=1 00:04:14.910 --rc geninfo_all_blocks=1 00:04:14.910 --rc geninfo_unexecuted_blocks=1 00:04:14.910 00:04:14.910 ' 00:04:14.910 09:06:15 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:14.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.910 --rc genhtml_branch_coverage=1 00:04:14.910 --rc genhtml_function_coverage=1 00:04:14.910 --rc genhtml_legend=1 00:04:14.910 --rc geninfo_all_blocks=1 00:04:14.910 --rc geninfo_unexecuted_blocks=1 00:04:14.910 00:04:14.910 ' 00:04:14.910 09:06:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=917084 00:04:14.910 09:06:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.911 09:06:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:14.911 09:06:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 917084 00:04:14.911 09:06:15 rpc -- common/autotest_common.sh@833 -- # '[' -z 917084 ']' 00:04:14.911 09:06:15 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.911 09:06:15 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:14.911 09:06:15 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.911 09:06:15 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:14.911 09:06:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.911 [2024-11-19 09:06:15.963398] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:04:14.911 [2024-11-19 09:06:15.963447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917084 ] 00:04:15.171 [2024-11-19 09:06:16.036688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.171 [2024-11-19 09:06:16.076317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.171 [2024-11-19 09:06:16.076355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 917084' to capture a snapshot of events at runtime. 00:04:15.171 [2024-11-19 09:06:16.076363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.171 [2024-11-19 09:06:16.076371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.171 [2024-11-19 09:06:16.076392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid917084 for offline analysis/debug. 
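Because spdk_tgt was launched with '-e bdev', the bdev tracepoint group is live (its tpoint_mask shows as 0xffffffffffffffff in trace_get_info below) and streaming to the shm file named in the NOTICE above. The NOTICE spells out the capture command; while the target is still up it looks like this (paths assume the build tree used in this run):

    # Attach to the running target's trace ring and dump bdev tracepoints:
    sudo ./build/bin/spdk_trace -s spdk_tgt -p 917084

    # Or keep the ring buffer for offline analysis once the target exits:
    cp /dev/shm/spdk_tgt_trace.pid917084 /tmp/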
00:04:15.171 [2024-11-19 09:06:16.076958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.430 09:06:16 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:15.430 09:06:16 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:15.430 09:06:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.430 09:06:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.430 09:06:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:15.430 09:06:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:15.430 09:06:16 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.430 09:06:16 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.430 09:06:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.430 ************************************ 00:04:15.430 START TEST rpc_integrity 00:04:15.430 ************************************ 00:04:15.430 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:15.430 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.430 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.430 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.430 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.430 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.430 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.430 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.430 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.430 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.430 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.430 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.431 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:15.431 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.431 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.431 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.431 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.431 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.431 { 00:04:15.431 "name": "Malloc0", 00:04:15.431 "aliases": [ 00:04:15.431 "c0013345-2923-4090-bec2-a297b8529485" 00:04:15.431 ], 00:04:15.431 "product_name": "Malloc disk", 00:04:15.431 "block_size": 512, 00:04:15.431 "num_blocks": 16384, 00:04:15.431 "uuid": "c0013345-2923-4090-bec2-a297b8529485", 00:04:15.431 "assigned_rate_limits": { 00:04:15.431 "rw_ios_per_sec": 0, 00:04:15.431 "rw_mbytes_per_sec": 0, 00:04:15.431 "r_mbytes_per_sec": 0, 00:04:15.431 "w_mbytes_per_sec": 0 00:04:15.431 }, 
00:04:15.431 "claimed": false, 00:04:15.431 "zoned": false, 00:04:15.431 "supported_io_types": { 00:04:15.431 "read": true, 00:04:15.431 "write": true, 00:04:15.431 "unmap": true, 00:04:15.431 "flush": true, 00:04:15.431 "reset": true, 00:04:15.431 "nvme_admin": false, 00:04:15.431 "nvme_io": false, 00:04:15.431 "nvme_io_md": false, 00:04:15.431 "write_zeroes": true, 00:04:15.431 "zcopy": true, 00:04:15.431 "get_zone_info": false, 00:04:15.431 "zone_management": false, 00:04:15.431 "zone_append": false, 00:04:15.431 "compare": false, 00:04:15.431 "compare_and_write": false, 00:04:15.431 "abort": true, 00:04:15.431 "seek_hole": false, 00:04:15.431 "seek_data": false, 00:04:15.431 "copy": true, 00:04:15.431 "nvme_iov_md": false 00:04:15.431 }, 00:04:15.431 "memory_domains": [ 00:04:15.431 { 00:04:15.431 "dma_device_id": "system", 00:04:15.431 "dma_device_type": 1 00:04:15.431 }, 00:04:15.431 { 00:04:15.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.431 "dma_device_type": 2 00:04:15.431 } 00:04:15.431 ], 00:04:15.431 "driver_specific": {} 00:04:15.431 } 00:04:15.431 ]' 00:04:15.431 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.431 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.431 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:15.431 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.431 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.431 [2024-11-19 09:06:16.459679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:15.431 [2024-11-19 09:06:16.459710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.431 [2024-11-19 09:06:16.459723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12307d0 00:04:15.431 [2024-11-19 09:06:16.459730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.431 [2024-11-19 09:06:16.460867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.431 [2024-11-19 09:06:16.460891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.431 Passthru0 00:04:15.431 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.431 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.431 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.431 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.690 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.690 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.690 { 00:04:15.690 "name": "Malloc0", 00:04:15.690 "aliases": [ 00:04:15.690 "c0013345-2923-4090-bec2-a297b8529485" 00:04:15.690 ], 00:04:15.690 "product_name": "Malloc disk", 00:04:15.690 "block_size": 512, 00:04:15.690 "num_blocks": 16384, 00:04:15.690 "uuid": "c0013345-2923-4090-bec2-a297b8529485", 00:04:15.690 "assigned_rate_limits": { 00:04:15.690 "rw_ios_per_sec": 0, 00:04:15.690 "rw_mbytes_per_sec": 0, 00:04:15.690 "r_mbytes_per_sec": 0, 00:04:15.690 "w_mbytes_per_sec": 0 00:04:15.690 }, 00:04:15.690 "claimed": true, 00:04:15.690 "claim_type": "exclusive_write", 00:04:15.690 "zoned": false, 00:04:15.690 "supported_io_types": { 00:04:15.690 "read": true, 00:04:15.690 "write": true, 00:04:15.690 "unmap": true, 00:04:15.690 "flush": 
true, 00:04:15.690 "reset": true, 00:04:15.690 "nvme_admin": false, 00:04:15.690 "nvme_io": false, 00:04:15.690 "nvme_io_md": false, 00:04:15.690 "write_zeroes": true, 00:04:15.690 "zcopy": true, 00:04:15.690 "get_zone_info": false, 00:04:15.690 "zone_management": false, 00:04:15.690 "zone_append": false, 00:04:15.690 "compare": false, 00:04:15.690 "compare_and_write": false, 00:04:15.690 "abort": true, 00:04:15.690 "seek_hole": false, 00:04:15.690 "seek_data": false, 00:04:15.690 "copy": true, 00:04:15.690 "nvme_iov_md": false 00:04:15.690 }, 00:04:15.690 "memory_domains": [ 00:04:15.690 { 00:04:15.690 "dma_device_id": "system", 00:04:15.690 "dma_device_type": 1 00:04:15.690 }, 00:04:15.690 { 00:04:15.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.690 "dma_device_type": 2 00:04:15.690 } 00:04:15.690 ], 00:04:15.690 "driver_specific": {} 00:04:15.690 }, 00:04:15.690 { 00:04:15.690 "name": "Passthru0", 00:04:15.690 "aliases": [ 00:04:15.690 "f460976e-ee76-5160-a417-9394c480d7f2" 00:04:15.690 ], 00:04:15.690 "product_name": "passthru", 00:04:15.690 "block_size": 512, 00:04:15.690 "num_blocks": 16384, 00:04:15.690 "uuid": "f460976e-ee76-5160-a417-9394c480d7f2", 00:04:15.690 "assigned_rate_limits": { 00:04:15.690 "rw_ios_per_sec": 0, 00:04:15.690 "rw_mbytes_per_sec": 0, 00:04:15.690 "r_mbytes_per_sec": 0, 00:04:15.690 "w_mbytes_per_sec": 0 00:04:15.690 }, 00:04:15.690 "claimed": false, 00:04:15.690 "zoned": false, 00:04:15.690 "supported_io_types": { 00:04:15.690 "read": true, 00:04:15.690 "write": true, 00:04:15.690 "unmap": true, 00:04:15.690 "flush": true, 00:04:15.690 "reset": true, 00:04:15.690 "nvme_admin": false, 00:04:15.690 "nvme_io": false, 00:04:15.690 "nvme_io_md": false, 00:04:15.690 "write_zeroes": true, 00:04:15.690 "zcopy": true, 00:04:15.690 "get_zone_info": false, 00:04:15.690 "zone_management": false, 00:04:15.690 "zone_append": false, 00:04:15.690 "compare": false, 00:04:15.690 "compare_and_write": false, 00:04:15.690 "abort": true, 00:04:15.690 "seek_hole": false, 00:04:15.690 "seek_data": false, 00:04:15.690 "copy": true, 00:04:15.690 "nvme_iov_md": false 00:04:15.690 }, 00:04:15.690 "memory_domains": [ 00:04:15.690 { 00:04:15.690 "dma_device_id": "system", 00:04:15.690 "dma_device_type": 1 00:04:15.691 }, 00:04:15.691 { 00:04:15.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.691 "dma_device_type": 2 00:04:15.691 } 00:04:15.691 ], 00:04:15.691 "driver_specific": { 00:04:15.691 "passthru": { 00:04:15.691 "name": "Passthru0", 00:04:15.691 "base_bdev_name": "Malloc0" 00:04:15.691 } 00:04:15.691 } 00:04:15.691 } 00:04:15.691 ]' 00:04:15.691 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.691 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.691 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.691 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.691 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.691 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.691 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.691 09:06:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.691 00:04:15.691 real 0m0.264s 00:04:15.691 user 0m0.162s 00:04:15.691 sys 0m0.039s 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.691 09:06:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.691 ************************************ 00:04:15.691 END TEST rpc_integrity 00:04:15.691 ************************************ 00:04:15.691 09:06:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:15.691 09:06:16 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.691 09:06:16 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.691 09:06:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.691 ************************************ 00:04:15.691 START TEST rpc_plugins 00:04:15.691 ************************************ 00:04:15.691 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:15.691 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:15.691 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.691 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.691 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.691 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:15.691 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:15.691 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.691 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.691 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.691 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:15.691 { 00:04:15.691 "name": "Malloc1", 00:04:15.691 "aliases": [ 00:04:15.691 "513c06bc-16fc-42c5-85ca-2f2507507d02" 00:04:15.691 ], 00:04:15.691 "product_name": "Malloc disk", 00:04:15.691 "block_size": 4096, 00:04:15.691 "num_blocks": 256, 00:04:15.691 "uuid": "513c06bc-16fc-42c5-85ca-2f2507507d02", 00:04:15.691 "assigned_rate_limits": { 00:04:15.691 "rw_ios_per_sec": 0, 00:04:15.691 "rw_mbytes_per_sec": 0, 00:04:15.691 "r_mbytes_per_sec": 0, 00:04:15.691 "w_mbytes_per_sec": 0 00:04:15.691 }, 00:04:15.691 "claimed": false, 00:04:15.691 "zoned": false, 00:04:15.691 "supported_io_types": { 00:04:15.691 "read": true, 00:04:15.691 "write": true, 00:04:15.691 "unmap": true, 00:04:15.691 "flush": true, 00:04:15.691 "reset": true, 00:04:15.691 "nvme_admin": false, 00:04:15.691 "nvme_io": false, 00:04:15.691 "nvme_io_md": false, 00:04:15.691 "write_zeroes": true, 00:04:15.691 "zcopy": true, 00:04:15.691 "get_zone_info": false, 00:04:15.691 "zone_management": false, 00:04:15.691 "zone_append": false, 00:04:15.691 "compare": false, 00:04:15.691 "compare_and_write": false, 00:04:15.691 "abort": true, 00:04:15.691 "seek_hole": false, 00:04:15.691 "seek_data": false, 00:04:15.691 "copy": true, 00:04:15.691 "nvme_iov_md": false 
00:04:15.691 }, 00:04:15.691 "memory_domains": [ 00:04:15.691 { 00:04:15.691 "dma_device_id": "system", 00:04:15.691 "dma_device_type": 1 00:04:15.691 }, 00:04:15.691 { 00:04:15.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.691 "dma_device_type": 2 00:04:15.691 } 00:04:15.691 ], 00:04:15.691 "driver_specific": {} 00:04:15.691 } 00:04:15.691 ]' 00:04:15.691 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:15.950 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:15.950 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:15.950 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.950 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.950 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.950 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:15.950 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.950 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.950 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.950 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:15.950 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:15.950 09:06:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:15.950 00:04:15.950 real 0m0.143s 00:04:15.950 user 0m0.091s 00:04:15.950 sys 0m0.016s 00:04:15.950 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.950 09:06:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.950 ************************************ 00:04:15.950 END TEST rpc_plugins 00:04:15.950 ************************************ 00:04:15.950 09:06:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:15.950 09:06:16 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.950 09:06:16 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.950 09:06:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.950 ************************************ 00:04:15.950 START TEST rpc_trace_cmd_test 00:04:15.950 ************************************ 00:04:15.950 09:06:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:15.950 09:06:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:15.950 09:06:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:15.950 09:06:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.950 09:06:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.950 09:06:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.950 09:06:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:15.950 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid917084", 00:04:15.950 "tpoint_group_mask": "0x8", 00:04:15.950 "iscsi_conn": { 00:04:15.950 "mask": "0x2", 00:04:15.950 "tpoint_mask": "0x0" 00:04:15.950 }, 00:04:15.951 "scsi": { 00:04:15.951 "mask": "0x4", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "bdev": { 00:04:15.951 "mask": "0x8", 00:04:15.951 "tpoint_mask": "0xffffffffffffffff" 00:04:15.951 }, 00:04:15.951 "nvmf_rdma": { 00:04:15.951 "mask": "0x10", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "nvmf_tcp": { 00:04:15.951 "mask": "0x20", 00:04:15.951 
"tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "ftl": { 00:04:15.951 "mask": "0x40", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "blobfs": { 00:04:15.951 "mask": "0x80", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "dsa": { 00:04:15.951 "mask": "0x200", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "thread": { 00:04:15.951 "mask": "0x400", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "nvme_pcie": { 00:04:15.951 "mask": "0x800", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "iaa": { 00:04:15.951 "mask": "0x1000", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "nvme_tcp": { 00:04:15.951 "mask": "0x2000", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "bdev_nvme": { 00:04:15.951 "mask": "0x4000", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "sock": { 00:04:15.951 "mask": "0x8000", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "blob": { 00:04:15.951 "mask": "0x10000", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "bdev_raid": { 00:04:15.951 "mask": "0x20000", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 }, 00:04:15.951 "scheduler": { 00:04:15.951 "mask": "0x40000", 00:04:15.951 "tpoint_mask": "0x0" 00:04:15.951 } 00:04:15.951 }' 00:04:15.951 09:06:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:15.951 09:06:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:15.951 09:06:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:15.951 09:06:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:15.951 09:06:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.210 09:06:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.210 09:06:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.210 09:06:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.210 09:06:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.210 09:06:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.210 00:04:16.210 real 0m0.189s 00:04:16.210 user 0m0.156s 00:04:16.210 sys 0m0.027s 00:04:16.210 09:06:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.210 09:06:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.210 ************************************ 00:04:16.210 END TEST rpc_trace_cmd_test 00:04:16.210 ************************************ 00:04:16.210 09:06:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.210 09:06:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.210 09:06:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.210 09:06:17 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.210 09:06:17 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.210 09:06:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.210 ************************************ 00:04:16.210 START TEST rpc_daemon_integrity 00:04:16.210 ************************************ 00:04:16.210 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:16.210 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.210 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.211 09:06:17 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.211 { 00:04:16.211 "name": "Malloc2", 00:04:16.211 "aliases": [ 00:04:16.211 "bced99c5-1c5a-429c-a3c3-1acd35d5bc21" 00:04:16.211 ], 00:04:16.211 "product_name": "Malloc disk", 00:04:16.211 "block_size": 512, 00:04:16.211 "num_blocks": 16384, 00:04:16.211 "uuid": "bced99c5-1c5a-429c-a3c3-1acd35d5bc21", 00:04:16.211 "assigned_rate_limits": { 00:04:16.211 "rw_ios_per_sec": 0, 00:04:16.211 "rw_mbytes_per_sec": 0, 00:04:16.211 "r_mbytes_per_sec": 0, 00:04:16.211 "w_mbytes_per_sec": 0 00:04:16.211 }, 00:04:16.211 "claimed": false, 00:04:16.211 "zoned": false, 00:04:16.211 "supported_io_types": { 00:04:16.211 "read": true, 00:04:16.211 "write": true, 00:04:16.211 "unmap": true, 00:04:16.211 "flush": true, 00:04:16.211 "reset": true, 00:04:16.211 "nvme_admin": false, 00:04:16.211 "nvme_io": false, 00:04:16.211 "nvme_io_md": false, 00:04:16.211 "write_zeroes": true, 00:04:16.211 "zcopy": true, 00:04:16.211 "get_zone_info": false, 00:04:16.211 "zone_management": false, 00:04:16.211 "zone_append": false, 00:04:16.211 "compare": false, 00:04:16.211 "compare_and_write": false, 00:04:16.211 "abort": true, 00:04:16.211 "seek_hole": false, 00:04:16.211 "seek_data": false, 00:04:16.211 "copy": true, 00:04:16.211 "nvme_iov_md": false 00:04:16.211 }, 00:04:16.211 "memory_domains": [ 00:04:16.211 { 00:04:16.211 "dma_device_id": "system", 00:04:16.211 "dma_device_type": 1 00:04:16.211 }, 00:04:16.211 { 00:04:16.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.211 "dma_device_type": 2 00:04:16.211 } 00:04:16.211 ], 00:04:16.211 "driver_specific": {} 00:04:16.211 } 00:04:16.211 ]' 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.211 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.471 [2024-11-19 09:06:17.269900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:16.471 
[2024-11-19 09:06:17.269929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.471 [2024-11-19 09:06:17.269941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12c0f60 00:04:16.471 [2024-11-19 09:06:17.269953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.471 [2024-11-19 09:06:17.271093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.471 [2024-11-19 09:06:17.271114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.471 Passthru0 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.471 { 00:04:16.471 "name": "Malloc2", 00:04:16.471 "aliases": [ 00:04:16.471 "bced99c5-1c5a-429c-a3c3-1acd35d5bc21" 00:04:16.471 ], 00:04:16.471 "product_name": "Malloc disk", 00:04:16.471 "block_size": 512, 00:04:16.471 "num_blocks": 16384, 00:04:16.471 "uuid": "bced99c5-1c5a-429c-a3c3-1acd35d5bc21", 00:04:16.471 "assigned_rate_limits": { 00:04:16.471 "rw_ios_per_sec": 0, 00:04:16.471 "rw_mbytes_per_sec": 0, 00:04:16.471 "r_mbytes_per_sec": 0, 00:04:16.471 "w_mbytes_per_sec": 0 00:04:16.471 }, 00:04:16.471 "claimed": true, 00:04:16.471 "claim_type": "exclusive_write", 00:04:16.471 "zoned": false, 00:04:16.471 "supported_io_types": { 00:04:16.471 "read": true, 00:04:16.471 "write": true, 00:04:16.471 "unmap": true, 00:04:16.471 "flush": true, 00:04:16.471 "reset": true, 00:04:16.471 "nvme_admin": false, 00:04:16.471 "nvme_io": false, 00:04:16.471 "nvme_io_md": false, 00:04:16.471 "write_zeroes": true, 00:04:16.471 "zcopy": true, 00:04:16.471 "get_zone_info": false, 00:04:16.471 "zone_management": false, 00:04:16.471 "zone_append": false, 00:04:16.471 "compare": false, 00:04:16.471 "compare_and_write": false, 00:04:16.471 "abort": true, 00:04:16.471 "seek_hole": false, 00:04:16.471 "seek_data": false, 00:04:16.471 "copy": true, 00:04:16.471 "nvme_iov_md": false 00:04:16.471 }, 00:04:16.471 "memory_domains": [ 00:04:16.471 { 00:04:16.471 "dma_device_id": "system", 00:04:16.471 "dma_device_type": 1 00:04:16.471 }, 00:04:16.471 { 00:04:16.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.471 "dma_device_type": 2 00:04:16.471 } 00:04:16.471 ], 00:04:16.471 "driver_specific": {} 00:04:16.471 }, 00:04:16.471 { 00:04:16.471 "name": "Passthru0", 00:04:16.471 "aliases": [ 00:04:16.471 "bbab32aa-91a7-562b-a0c5-a389090b6937" 00:04:16.471 ], 00:04:16.471 "product_name": "passthru", 00:04:16.471 "block_size": 512, 00:04:16.471 "num_blocks": 16384, 00:04:16.471 "uuid": "bbab32aa-91a7-562b-a0c5-a389090b6937", 00:04:16.471 "assigned_rate_limits": { 00:04:16.471 "rw_ios_per_sec": 0, 00:04:16.471 "rw_mbytes_per_sec": 0, 00:04:16.471 "r_mbytes_per_sec": 0, 00:04:16.471 "w_mbytes_per_sec": 0 00:04:16.471 }, 00:04:16.471 "claimed": false, 00:04:16.471 "zoned": false, 00:04:16.471 "supported_io_types": { 00:04:16.471 "read": true, 00:04:16.471 "write": true, 00:04:16.471 "unmap": true, 00:04:16.471 "flush": true, 00:04:16.471 "reset": true, 
00:04:16.471 "nvme_admin": false, 00:04:16.471 "nvme_io": false, 00:04:16.471 "nvme_io_md": false, 00:04:16.471 "write_zeroes": true, 00:04:16.471 "zcopy": true, 00:04:16.471 "get_zone_info": false, 00:04:16.471 "zone_management": false, 00:04:16.471 "zone_append": false, 00:04:16.471 "compare": false, 00:04:16.471 "compare_and_write": false, 00:04:16.471 "abort": true, 00:04:16.471 "seek_hole": false, 00:04:16.471 "seek_data": false, 00:04:16.471 "copy": true, 00:04:16.471 "nvme_iov_md": false 00:04:16.471 }, 00:04:16.471 "memory_domains": [ 00:04:16.471 { 00:04:16.471 "dma_device_id": "system", 00:04:16.471 "dma_device_type": 1 00:04:16.471 }, 00:04:16.471 { 00:04:16.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.471 "dma_device_type": 2 00:04:16.471 } 00:04:16.471 ], 00:04:16.471 "driver_specific": { 00:04:16.471 "passthru": { 00:04:16.471 "name": "Passthru0", 00:04:16.471 "base_bdev_name": "Malloc2" 00:04:16.471 } 00:04:16.471 } 00:04:16.471 } 00:04:16.471 ]' 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.471 00:04:16.471 real 0m0.275s 00:04:16.471 user 0m0.179s 00:04:16.471 sys 0m0.035s 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.471 09:06:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.471 ************************************ 00:04:16.471 END TEST rpc_daemon_integrity 00:04:16.471 ************************************ 00:04:16.471 09:06:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:16.471 09:06:17 rpc -- rpc/rpc.sh@84 -- # killprocess 917084 00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@952 -- # '[' -z 917084 ']' 00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@956 -- # kill -0 917084 00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@957 -- # uname 00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 917084 
00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 917084' 00:04:16.471 killing process with pid 917084 00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@971 -- # kill 917084 00:04:16.471 09:06:17 rpc -- common/autotest_common.sh@976 -- # wait 917084 00:04:17.039 00:04:17.039 real 0m2.068s 00:04:17.039 user 0m2.621s 00:04:17.039 sys 0m0.685s 00:04:17.039 09:06:17 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:17.039 09:06:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.039 ************************************ 00:04:17.039 END TEST rpc 00:04:17.039 ************************************ 00:04:17.039 09:06:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:17.039 09:06:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:17.039 09:06:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:17.039 09:06:17 -- common/autotest_common.sh@10 -- # set +x 00:04:17.039 ************************************ 00:04:17.039 START TEST skip_rpc 00:04:17.039 ************************************ 00:04:17.039 09:06:17 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:17.039 * Looking for test storage... 00:04:17.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:17.039 09:06:17 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:17.039 09:06:17 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:17.039 09:06:17 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:17.039 09:06:18 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.039 09:06:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:17.039 09:06:18 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.039 09:06:18 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:17.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.039 --rc genhtml_branch_coverage=1 00:04:17.039 --rc genhtml_function_coverage=1 00:04:17.039 --rc genhtml_legend=1 00:04:17.039 --rc geninfo_all_blocks=1 00:04:17.039 --rc geninfo_unexecuted_blocks=1 00:04:17.039 00:04:17.039 ' 00:04:17.039 09:06:18 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:17.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.039 --rc genhtml_branch_coverage=1 00:04:17.039 --rc genhtml_function_coverage=1 00:04:17.039 --rc genhtml_legend=1 00:04:17.039 --rc geninfo_all_blocks=1 00:04:17.039 --rc geninfo_unexecuted_blocks=1 00:04:17.039 00:04:17.039 ' 00:04:17.039 09:06:18 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:17.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.039 --rc genhtml_branch_coverage=1 00:04:17.039 --rc genhtml_function_coverage=1 00:04:17.039 --rc genhtml_legend=1 00:04:17.039 --rc geninfo_all_blocks=1 00:04:17.039 --rc geninfo_unexecuted_blocks=1 00:04:17.040 00:04:17.040 ' 00:04:17.040 09:06:18 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:17.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.040 --rc genhtml_branch_coverage=1 00:04:17.040 --rc genhtml_function_coverage=1 00:04:17.040 --rc genhtml_legend=1 00:04:17.040 --rc geninfo_all_blocks=1 00:04:17.040 --rc geninfo_unexecuted_blocks=1 00:04:17.040 00:04:17.040 ' 00:04:17.040 09:06:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:17.040 09:06:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:17.040 09:06:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:17.040 09:06:18 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:17.040 09:06:18 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:17.040 09:06:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.040 ************************************ 00:04:17.040 START TEST skip_rpc 00:04:17.040 ************************************ 00:04:17.040 09:06:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:17.040 
09:06:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=917718 00:04:17.040 09:06:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.040 09:06:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:17.040 09:06:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:17.298 [2024-11-19 09:06:18.127430] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:04:17.298 [2024-11-19 09:06:18.127469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917718 ] 00:04:17.298 [2024-11-19 09:06:18.201117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.298 [2024-11-19 09:06:18.241202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 917718 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 917718 ']' 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 917718 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 917718 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 917718' 00:04:22.571 killing process with pid 917718 00:04:22.571 09:06:23 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 917718 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 917718 00:04:22.571 00:04:22.571 real 0m5.366s 00:04:22.571 user 0m5.113s 00:04:22.571 sys 0m0.289s 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.571 09:06:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.571 ************************************ 00:04:22.571 END TEST skip_rpc 00:04:22.571 ************************************ 00:04:22.571 09:06:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:22.571 09:06:23 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.571 09:06:23 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.571 09:06:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.571 ************************************ 00:04:22.571 START TEST skip_rpc_with_json 00:04:22.571 ************************************ 00:04:22.571 09:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:22.571 09:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:22.571 09:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=918658 00:04:22.571 09:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.571 09:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.571 09:06:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 918658 00:04:22.572 09:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 918658 ']' 00:04:22.572 09:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.572 09:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:22.572 09:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.572 09:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:22.572 09:06:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.572 [2024-11-19 09:06:23.569328] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
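The records above exercise test_skip_rpc: start spdk_tgt with --no-rpc-server, assert that any RPC (spdk_get_version) fails because no /var/tmp/spdk.sock exists, then SIGINT the target. A minimal sketch of that flow, with assumed relative paths (the real logic lives in test/rpc/skip_rpc.sh and the NOT/rpc_cmd helpers of common/autotest_common.sh):

#!/usr/bin/env bash
# Sketch only: paths and timings are assumptions, not the test verbatim.
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
trap 'kill -9 $spdk_pid; exit 1' SIGINT SIGTERM EXIT
sleep 5                                   # let the reactor come up

# With no RPC server, this must fail -- the NOT/rpc_cmd pair in the
# trace asserts exactly that (es=1 after the call).
if scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC succeeded without an RPC server" >&2
    exit 1
fi

trap - SIGINT SIGTERM EXIT
kill -SIGINT "$spdk_pid"
wait "$spdk_pid" || true                  # reactor exits on SIGINT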
00:04:22.572 [2024-11-19 09:06:23.569374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid918658 ] 00:04:22.831 [2024-11-19 09:06:23.646724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.831 [2024-11-19 09:06:23.688532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.398 [2024-11-19 09:06:24.408986] nvmf_rpc.c:2868:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:23.398 request: 00:04:23.398 { 00:04:23.398 "trtype": "tcp", 00:04:23.398 "method": "nvmf_get_transports", 00:04:23.398 "req_id": 1 00:04:23.398 } 00:04:23.398 Got JSON-RPC error response 00:04:23.398 response: 00:04:23.398 { 00:04:23.398 "code": -19, 00:04:23.398 "message": "No such device" 00:04:23.398 } 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.398 [2024-11-19 09:06:24.421092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.398 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.659 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.659 09:06:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.659 { 00:04:23.659 "subsystems": [ 00:04:23.659 { 00:04:23.659 "subsystem": "fsdev", 00:04:23.659 "config": [ 00:04:23.659 { 00:04:23.659 "method": "fsdev_set_opts", 00:04:23.659 "params": { 00:04:23.659 "fsdev_io_pool_size": 65535, 00:04:23.659 "fsdev_io_cache_size": 256 00:04:23.659 } 00:04:23.659 } 00:04:23.659 ] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "vfio_user_target", 00:04:23.659 "config": null 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "keyring", 00:04:23.659 "config": [] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "iobuf", 00:04:23.659 "config": [ 00:04:23.659 { 00:04:23.659 "method": "iobuf_set_options", 00:04:23.659 "params": { 00:04:23.659 "small_pool_count": 8192, 00:04:23.659 "large_pool_count": 1024, 00:04:23.659 "small_bufsize": 8192, 00:04:23.659 "large_bufsize": 135168, 00:04:23.659 "enable_numa": false 00:04:23.659 } 00:04:23.659 } 00:04:23.659 
] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "sock", 00:04:23.659 "config": [ 00:04:23.659 { 00:04:23.659 "method": "sock_set_default_impl", 00:04:23.659 "params": { 00:04:23.659 "impl_name": "posix" 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "sock_impl_set_options", 00:04:23.659 "params": { 00:04:23.659 "impl_name": "ssl", 00:04:23.659 "recv_buf_size": 4096, 00:04:23.659 "send_buf_size": 4096, 00:04:23.659 "enable_recv_pipe": true, 00:04:23.659 "enable_quickack": false, 00:04:23.659 "enable_placement_id": 0, 00:04:23.659 "enable_zerocopy_send_server": true, 00:04:23.659 "enable_zerocopy_send_client": false, 00:04:23.659 "zerocopy_threshold": 0, 00:04:23.659 "tls_version": 0, 00:04:23.659 "enable_ktls": false 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "sock_impl_set_options", 00:04:23.659 "params": { 00:04:23.659 "impl_name": "posix", 00:04:23.659 "recv_buf_size": 2097152, 00:04:23.659 "send_buf_size": 2097152, 00:04:23.659 "enable_recv_pipe": true, 00:04:23.659 "enable_quickack": false, 00:04:23.659 "enable_placement_id": 0, 00:04:23.659 "enable_zerocopy_send_server": true, 00:04:23.659 "enable_zerocopy_send_client": false, 00:04:23.659 "zerocopy_threshold": 0, 00:04:23.659 "tls_version": 0, 00:04:23.659 "enable_ktls": false 00:04:23.659 } 00:04:23.659 } 00:04:23.659 ] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "vmd", 00:04:23.659 "config": [] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "accel", 00:04:23.659 "config": [ 00:04:23.659 { 00:04:23.659 "method": "accel_set_options", 00:04:23.659 "params": { 00:04:23.659 "small_cache_size": 128, 00:04:23.659 "large_cache_size": 16, 00:04:23.659 "task_count": 2048, 00:04:23.659 "sequence_count": 2048, 00:04:23.659 "buf_count": 2048 00:04:23.659 } 00:04:23.659 } 00:04:23.659 ] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "bdev", 00:04:23.659 "config": [ 00:04:23.659 { 00:04:23.659 "method": "bdev_set_options", 00:04:23.659 "params": { 00:04:23.659 "bdev_io_pool_size": 65535, 00:04:23.659 "bdev_io_cache_size": 256, 00:04:23.659 "bdev_auto_examine": true, 00:04:23.659 "iobuf_small_cache_size": 128, 00:04:23.659 "iobuf_large_cache_size": 16 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "bdev_raid_set_options", 00:04:23.659 "params": { 00:04:23.659 "process_window_size_kb": 1024, 00:04:23.659 "process_max_bandwidth_mb_sec": 0 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "bdev_iscsi_set_options", 00:04:23.659 "params": { 00:04:23.659 "timeout_sec": 30 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "bdev_nvme_set_options", 00:04:23.659 "params": { 00:04:23.659 "action_on_timeout": "none", 00:04:23.659 "timeout_us": 0, 00:04:23.659 "timeout_admin_us": 0, 00:04:23.659 "keep_alive_timeout_ms": 10000, 00:04:23.659 "arbitration_burst": 0, 00:04:23.659 "low_priority_weight": 0, 00:04:23.659 "medium_priority_weight": 0, 00:04:23.659 "high_priority_weight": 0, 00:04:23.659 "nvme_adminq_poll_period_us": 10000, 00:04:23.659 "nvme_ioq_poll_period_us": 0, 00:04:23.659 "io_queue_requests": 0, 00:04:23.659 "delay_cmd_submit": true, 00:04:23.659 "transport_retry_count": 4, 00:04:23.659 "bdev_retry_count": 3, 00:04:23.659 "transport_ack_timeout": 0, 00:04:23.659 "ctrlr_loss_timeout_sec": 0, 00:04:23.659 "reconnect_delay_sec": 0, 00:04:23.659 "fast_io_fail_timeout_sec": 0, 00:04:23.659 "disable_auto_failback": false, 00:04:23.659 "generate_uuids": false, 00:04:23.659 "transport_tos": 0, 
00:04:23.659 "nvme_error_stat": false, 00:04:23.659 "rdma_srq_size": 0, 00:04:23.659 "io_path_stat": false, 00:04:23.659 "allow_accel_sequence": false, 00:04:23.659 "rdma_max_cq_size": 0, 00:04:23.659 "rdma_cm_event_timeout_ms": 0, 00:04:23.659 "dhchap_digests": [ 00:04:23.659 "sha256", 00:04:23.659 "sha384", 00:04:23.659 "sha512" 00:04:23.659 ], 00:04:23.659 "dhchap_dhgroups": [ 00:04:23.659 "null", 00:04:23.659 "ffdhe2048", 00:04:23.659 "ffdhe3072", 00:04:23.659 "ffdhe4096", 00:04:23.659 "ffdhe6144", 00:04:23.659 "ffdhe8192" 00:04:23.659 ] 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "bdev_nvme_set_hotplug", 00:04:23.659 "params": { 00:04:23.659 "period_us": 100000, 00:04:23.659 "enable": false 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "bdev_wait_for_examine" 00:04:23.659 } 00:04:23.659 ] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "scsi", 00:04:23.659 "config": null 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "scheduler", 00:04:23.659 "config": [ 00:04:23.659 { 00:04:23.659 "method": "framework_set_scheduler", 00:04:23.659 "params": { 00:04:23.659 "name": "static" 00:04:23.659 } 00:04:23.659 } 00:04:23.659 ] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "vhost_scsi", 00:04:23.659 "config": [] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "vhost_blk", 00:04:23.659 "config": [] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "ublk", 00:04:23.659 "config": [] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "nbd", 00:04:23.659 "config": [] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "nvmf", 00:04:23.659 "config": [ 00:04:23.659 { 00:04:23.659 "method": "nvmf_set_config", 00:04:23.659 "params": { 00:04:23.659 "discovery_filter": "match_any", 00:04:23.659 "admin_cmd_passthru": { 00:04:23.659 "identify_ctrlr": false 00:04:23.659 }, 00:04:23.659 "dhchap_digests": [ 00:04:23.659 "sha256", 00:04:23.659 "sha384", 00:04:23.659 "sha512" 00:04:23.659 ], 00:04:23.659 "dhchap_dhgroups": [ 00:04:23.659 "null", 00:04:23.659 "ffdhe2048", 00:04:23.659 "ffdhe3072", 00:04:23.659 "ffdhe4096", 00:04:23.659 "ffdhe6144", 00:04:23.659 "ffdhe8192" 00:04:23.659 ] 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "nvmf_set_max_subsystems", 00:04:23.659 "params": { 00:04:23.659 "max_subsystems": 1024 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "nvmf_set_crdt", 00:04:23.659 "params": { 00:04:23.659 "crdt1": 0, 00:04:23.659 "crdt2": 0, 00:04:23.659 "crdt3": 0 00:04:23.659 } 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "method": "nvmf_create_transport", 00:04:23.659 "params": { 00:04:23.659 "trtype": "TCP", 00:04:23.659 "max_queue_depth": 128, 00:04:23.659 "max_io_qpairs_per_ctrlr": 127, 00:04:23.659 "in_capsule_data_size": 4096, 00:04:23.659 "max_io_size": 131072, 00:04:23.659 "io_unit_size": 131072, 00:04:23.659 "max_aq_depth": 128, 00:04:23.659 "num_shared_buffers": 511, 00:04:23.659 "buf_cache_size": 4294967295, 00:04:23.659 "dif_insert_or_strip": false, 00:04:23.659 "zcopy": false, 00:04:23.659 "c2h_success": true, 00:04:23.659 "sock_priority": 0, 00:04:23.659 "abort_timeout_sec": 1, 00:04:23.659 "ack_timeout": 0, 00:04:23.659 "data_wr_pool_size": 0 00:04:23.659 } 00:04:23.659 } 00:04:23.659 ] 00:04:23.659 }, 00:04:23.659 { 00:04:23.659 "subsystem": "iscsi", 00:04:23.659 "config": [ 00:04:23.659 { 00:04:23.659 "method": "iscsi_set_options", 00:04:23.660 "params": { 00:04:23.660 "node_base": "iqn.2016-06.io.spdk", 00:04:23.660 "max_sessions": 
128, 00:04:23.660 "max_connections_per_session": 2, 00:04:23.660 "max_queue_depth": 64, 00:04:23.660 "default_time2wait": 2, 00:04:23.660 "default_time2retain": 20, 00:04:23.660 "first_burst_length": 8192, 00:04:23.660 "immediate_data": true, 00:04:23.660 "allow_duplicated_isid": false, 00:04:23.660 "error_recovery_level": 0, 00:04:23.660 "nop_timeout": 60, 00:04:23.660 "nop_in_interval": 30, 00:04:23.660 "disable_chap": false, 00:04:23.660 "require_chap": false, 00:04:23.660 "mutual_chap": false, 00:04:23.660 "chap_group": 0, 00:04:23.660 "max_large_datain_per_connection": 64, 00:04:23.660 "max_r2t_per_connection": 4, 00:04:23.660 "pdu_pool_size": 36864, 00:04:23.660 "immediate_data_pool_size": 16384, 00:04:23.660 "data_out_pool_size": 2048 00:04:23.660 } 00:04:23.660 } 00:04:23.660 ] 00:04:23.660 } 00:04:23.660 ] 00:04:23.660 } 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 918658 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 918658 ']' 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 918658 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 918658 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 918658' 00:04:23.660 killing process with pid 918658 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 918658 00:04:23.660 09:06:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 918658 00:04:23.919 09:06:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=918901 00:04:23.920 09:06:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.920 09:06:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:29.196 09:06:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 918901 00:04:29.196 09:06:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 918901 ']' 00:04:29.196 09:06:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 918901 00:04:29.196 09:06:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:29.196 09:06:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:29.196 09:06:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 918901 00:04:29.196 09:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:29.196 09:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:29.196 09:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 918901' 00:04:29.196 killing process with pid 918901 00:04:29.196 09:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 918901 00:04:29.196 09:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 918901 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.456 00:04:29.456 real 0m6.809s 00:04:29.456 user 0m6.626s 00:04:29.456 sys 0m0.668s 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.456 ************************************ 00:04:29.456 END TEST skip_rpc_with_json 00:04:29.456 ************************************ 00:04:29.456 09:06:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:29.456 09:06:30 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.456 09:06:30 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.456 09:06:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.456 ************************************ 00:04:29.456 START TEST skip_rpc_with_delay 00:04:29.456 ************************************ 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.456 [2024-11-19 
09:06:30.447524] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:29.456 00:04:29.456 real 0m0.072s 00:04:29.456 user 0m0.047s 00:04:29.456 sys 0m0.024s 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.456 09:06:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:29.456 ************************************ 00:04:29.456 END TEST skip_rpc_with_delay 00:04:29.456 ************************************ 00:04:29.456 09:06:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:29.456 09:06:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:29.456 09:06:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:29.456 09:06:30 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.456 09:06:30 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.456 09:06:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.716 ************************************ 00:04:29.716 START TEST exit_on_failed_rpc_init 00:04:29.716 ************************************ 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=919878 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 919878 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 919878 ']' 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.716 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.716 [2024-11-19 09:06:30.589811] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
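test_skip_rpc_with_delay, traced above, checks the inverse guard: spdk_tgt must refuse to start when --no-rpc-server and --wait-for-rpc are combined, since there would be no RPC server to wait on. A one-assertion sketch (binary path assumed):

# Sketch: the contradictory flag pair must be rejected at startup.
if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: contradictory flags were accepted" >&2
    exit 1
fi
# Expected on stderr, as in the app.c record above:
#   Cannot use '--wait-for-rpc' if no RPC server is going to be started.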
00:04:29.716 [2024-11-19 09:06:30.589854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919878 ] 00:04:29.716 [2024-11-19 09:06:30.665839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.716 [2024-11-19 09:06:30.708661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.975 09:06:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.975 [2024-11-19 09:06:30.981121] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:04:29.975 [2024-11-19 09:06:30.981166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid919887 ] 00:04:30.234 [2024-11-19 09:06:31.053741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.234 [2024-11-19 09:06:31.094824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.234 [2024-11-19 09:06:31.094878] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
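The exit_on_failed_rpc_init trace starts a first spdk_tgt on the default socket, then launches a second one (-m 0x2) against the same /var/tmp/spdk.sock and asserts it dies during RPC init. A condensed sketch of that collision (paths and the sleep stand-in are assumptions; the real test uses waitforlisten and the NOT helper):

# First instance owns the default RPC socket.
./build/bin/spdk_tgt -m 0x1 &
first_pid=$!
sleep 1                                   # crude stand-in for waitforlisten

# Second instance must fail: socket path already in use.
if ./build/bin/spdk_tgt -m 0x2; then
    echo "unexpected: second target started on a busy socket" >&2
    exit 1
fi
# Expected errors, matching the surrounding records:
#   RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
#   Unable to start RPC service at /var/tmp/spdk.sock
kill -SIGINT "$first_pid"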
00:04:30.234 [2024-11-19 09:06:31.094888] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:30.234 [2024-11-19 09:06:31.094893] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:30.234 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:30.234 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:30.234 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:30.234 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:30.234 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:30.234 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:30.234 09:06:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:30.234 09:06:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 919878 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 919878 ']' 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 919878 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 919878 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 919878' 00:04:30.235 killing process with pid 919878 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 919878 00:04:30.235 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 919878 00:04:30.494 00:04:30.494 real 0m0.956s 00:04:30.494 user 0m1.016s 00:04:30.494 sys 0m0.394s 00:04:30.495 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.495 09:06:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.495 ************************************ 00:04:30.495 END TEST exit_on_failed_rpc_init 00:04:30.495 ************************************ 00:04:30.495 09:06:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.495 00:04:30.495 real 0m13.657s 00:04:30.495 user 0m12.996s 00:04:30.495 sys 0m1.667s 00:04:30.495 09:06:31 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.495 09:06:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.495 ************************************ 00:04:30.495 END TEST skip_rpc 00:04:30.495 ************************************ 00:04:30.755 09:06:31 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.755 09:06:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.755 09:06:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.755 09:06:31 -- 
common/autotest_common.sh@10 -- # set +x 00:04:30.755 ************************************ 00:04:30.755 START TEST rpc_client 00:04:30.755 ************************************ 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.755 * Looking for test storage... 00:04:30.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.755 09:06:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:30.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.755 --rc genhtml_branch_coverage=1 00:04:30.755 --rc genhtml_function_coverage=1 00:04:30.755 --rc genhtml_legend=1 00:04:30.755 --rc geninfo_all_blocks=1 00:04:30.755 --rc geninfo_unexecuted_blocks=1 00:04:30.755 00:04:30.755 ' 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:30.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.755 --rc genhtml_branch_coverage=1 00:04:30.755 --rc genhtml_function_coverage=1 00:04:30.755 --rc genhtml_legend=1 00:04:30.755 --rc geninfo_all_blocks=1 00:04:30.755 --rc geninfo_unexecuted_blocks=1 00:04:30.755 00:04:30.755 ' 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:30.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.755 --rc genhtml_branch_coverage=1 00:04:30.755 --rc genhtml_function_coverage=1 00:04:30.755 --rc genhtml_legend=1 00:04:30.755 --rc geninfo_all_blocks=1 00:04:30.755 --rc geninfo_unexecuted_blocks=1 00:04:30.755 00:04:30.755 ' 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:30.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.755 --rc genhtml_branch_coverage=1 00:04:30.755 --rc genhtml_function_coverage=1 00:04:30.755 --rc genhtml_legend=1 00:04:30.755 --rc geninfo_all_blocks=1 00:04:30.755 --rc geninfo_unexecuted_blocks=1 00:04:30.755 00:04:30.755 ' 00:04:30.755 09:06:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:30.755 OK 00:04:30.755 09:06:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:30.755 00:04:30.755 real 0m0.195s 00:04:30.755 user 0m0.121s 00:04:30.755 sys 0m0.087s 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.755 09:06:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:30.755 ************************************ 00:04:30.755 END TEST rpc_client 00:04:30.755 ************************************ 00:04:31.016 09:06:31 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
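Both the rpc_client and json_config suites open by tracing the lcov version gate from scripts/common.sh: lt 1.15 2 splits each version on '.', '-' and ':', treats missing fields as zero, and compares field by field. A simplified re-implementation of that comparison (the in-tree version also validates each field through decimal()):

# Sketch of scripts/common.sh's cmp_versions, simplified.
cmp_versions() {
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    return 1    # equal: neither strictly '<' nor '>'
}
cmp_versions 1.15 '<' 2 && echo "1.15 < 2"      # succeeds, as traced above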
00:04:31.016 09:06:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.016 09:06:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.016 09:06:31 -- common/autotest_common.sh@10 -- # set +x 00:04:31.016 ************************************ 00:04:31.016 START TEST json_config 00:04:31.016 ************************************ 00:04:31.016 09:06:31 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:31.016 09:06:31 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:31.016 09:06:31 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:31.016 09:06:31 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:31.016 09:06:31 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:31.016 09:06:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.016 09:06:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.016 09:06:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.016 09:06:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.016 09:06:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.016 09:06:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.016 09:06:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.016 09:06:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.016 09:06:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.016 09:06:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.016 09:06:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.016 09:06:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:31.016 09:06:31 json_config -- scripts/common.sh@345 -- # : 1 00:04:31.016 09:06:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.016 09:06:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.016 09:06:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:31.016 09:06:31 json_config -- scripts/common.sh@353 -- # local d=1 00:04:31.016 09:06:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.016 09:06:31 json_config -- scripts/common.sh@355 -- # echo 1 00:04:31.016 09:06:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.016 09:06:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:31.016 09:06:32 json_config -- scripts/common.sh@353 -- # local d=2 00:04:31.016 09:06:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.016 09:06:32 json_config -- scripts/common.sh@355 -- # echo 2 00:04:31.016 09:06:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.016 09:06:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.016 09:06:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.016 09:06:32 json_config -- scripts/common.sh@368 -- # return 0 00:04:31.016 09:06:32 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.016 09:06:32 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:31.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.016 --rc genhtml_branch_coverage=1 00:04:31.016 --rc genhtml_function_coverage=1 00:04:31.016 --rc genhtml_legend=1 00:04:31.016 --rc geninfo_all_blocks=1 00:04:31.016 --rc geninfo_unexecuted_blocks=1 00:04:31.016 00:04:31.016 ' 00:04:31.016 09:06:32 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:31.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.016 --rc genhtml_branch_coverage=1 00:04:31.016 --rc genhtml_function_coverage=1 00:04:31.016 --rc genhtml_legend=1 00:04:31.016 --rc geninfo_all_blocks=1 00:04:31.016 --rc geninfo_unexecuted_blocks=1 00:04:31.016 00:04:31.016 ' 00:04:31.016 09:06:32 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:31.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.016 --rc genhtml_branch_coverage=1 00:04:31.016 --rc genhtml_function_coverage=1 00:04:31.016 --rc genhtml_legend=1 00:04:31.016 --rc geninfo_all_blocks=1 00:04:31.016 --rc geninfo_unexecuted_blocks=1 00:04:31.016 00:04:31.016 ' 00:04:31.016 09:06:32 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:31.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.016 --rc genhtml_branch_coverage=1 00:04:31.016 --rc genhtml_function_coverage=1 00:04:31.016 --rc genhtml_legend=1 00:04:31.016 --rc geninfo_all_blocks=1 00:04:31.016 --rc geninfo_unexecuted_blocks=1 00:04:31.016 00:04:31.016 ' 00:04:31.016 09:06:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:31.016 09:06:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.016 09:06:32 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:31.016 09:06:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:31.016 09:06:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.016 09:06:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.016 09:06:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.016 09:06:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.016 09:06:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.016 09:06:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.016 09:06:32 json_config -- paths/export.sh@5 -- # export PATH 00:04:31.017 09:06:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.017 09:06:32 json_config -- nvmf/common.sh@51 -- # : 0 00:04:31.017 09:06:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:31.017 09:06:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:31.017 09:06:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.017 09:06:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.017 09:06:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:31.017 09:06:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:31.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:31.017 09:06:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:31.017 09:06:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:31.017 09:06:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:31.017 INFO: JSON configuration test init 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.017 09:06:32 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:31.017 09:06:32 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:31.017 09:06:32 json_config -- json_config/common.sh@10 -- # shift 00:04:31.017 09:06:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:31.017 09:06:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:31.017 09:06:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:31.017 09:06:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.017 09:06:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.017 09:06:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=920239 00:04:31.017 09:06:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:31.017 Waiting for target to run... 00:04:31.017 09:06:32 json_config -- json_config/common.sh@25 -- # waitforlisten 920239 /var/tmp/spdk_tgt.sock 00:04:31.017 09:06:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@833 -- # '[' -z 920239 ']' 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:31.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:31.017 09:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.276 [2024-11-19 09:06:32.108564] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
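The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock..." record above comes from waitforlisten in common/autotest_common.sh, which polls (max_retries=100 in the trace) until the target answers on its socket. A simplified sketch of that polling loop (rpc.py path assumed; the in-tree helper carries more error handling):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100
    local i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1       # target died while starting
        [[ -S $rpc_addr ]] &&
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null &&
            return 0                                 # socket up and answering
        sleep 0.1
    done
    return 1                                         # never came up
}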
00:04:31.276 [2024-11-19 09:06:32.108609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid920239 ] 00:04:31.536 [2024-11-19 09:06:32.398373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.536 [2024-11-19 09:06:32.433759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.104 09:06:32 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:32.104 09:06:32 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:32.104 09:06:32 json_config -- json_config/common.sh@26 -- # echo '' 00:04:32.104 00:04:32.104 09:06:32 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:32.104 09:06:32 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:32.104 09:06:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.104 09:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.104 09:06:32 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:32.104 09:06:32 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:32.104 09:06:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.104 09:06:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.104 09:06:32 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:32.104 09:06:32 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:32.104 09:06:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:35.396 09:06:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.396 09:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:35.396 09:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:35.396 09:06:36 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@54 -- # sort 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:35.396 09:06:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:35.396 09:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:35.396 09:06:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.396 09:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:35.396 09:06:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:35.396 09:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:35.655 MallocForNvmf0 00:04:35.655 09:06:36 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:35.655 09:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:35.914 MallocForNvmf1 00:04:35.914 09:06:36 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:35.914 09:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:35.914 [2024-11-19 09:06:36.910976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.914 09:06:36 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.914 09:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:36.173 09:06:37 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:36.173 09:06:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:36.432 09:06:37 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:36.432 09:06:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:36.691 09:06:37 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:36.691 09:06:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:36.691 [2024-11-19 09:06:37.717484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:36.691 09:06:37 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:36.691 09:06:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.691 09:06:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.950 09:06:37 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:36.950 09:06:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.950 09:06:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.950 09:06:37 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:36.950 09:06:37 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.950 09:06:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.951 MallocBdevForConfigChangeCheck 00:04:36.951 09:06:38 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:36.951 09:06:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.951 09:06:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.210 09:06:38 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:37.210 09:06:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.469 09:06:38 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:37.469 INFO: shutting down applications... 
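The create_nvmf_subsystem_config phase traced above reduces to a short RPC sequence against the target's UNIX-domain socket. A minimal standalone sketch, with every argument taken verbatim from the trace (rpc.py path abbreviated; a running spdk_tgt on /var/tmp/spdk_tgt.sock is assumed):

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Two malloc bdevs to serve as namespaces: 8 MB/512-byte blocks, 4 MB/1024-byte blocks.
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, then a subsystem carrying both namespaces and a listener on 127.0.0.1:4420.
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The two *NOTICE* lines from tcp.c above ("TCP Transport Init" and "Target Listening on 127.0.0.1 port 4420") are the target-side acknowledgements of the transport and listener calls.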
00:04:37.469 09:06:38 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:37.469 09:06:38 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:37.469 09:06:38 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:37.469 09:06:38 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:39.371 Calling clear_iscsi_subsystem 00:04:39.371 Calling clear_nvmf_subsystem 00:04:39.371 Calling clear_nbd_subsystem 00:04:39.371 Calling clear_ublk_subsystem 00:04:39.371 Calling clear_vhost_blk_subsystem 00:04:39.371 Calling clear_vhost_scsi_subsystem 00:04:39.371 Calling clear_bdev_subsystem 00:04:39.371 09:06:39 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:39.371 09:06:39 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:39.371 09:06:39 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:39.371 09:06:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.371 09:06:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:39.371 09:06:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:39.371 09:06:40 json_config -- json_config/json_config.sh@352 -- # break 00:04:39.371 09:06:40 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:39.371 09:06:40 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:39.371 09:06:40 json_config -- json_config/common.sh@31 -- # local app=target 00:04:39.371 09:06:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.371 09:06:40 json_config -- json_config/common.sh@35 -- # [[ -n 920239 ]] 00:04:39.371 09:06:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 920239 00:04:39.371 09:06:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.371 09:06:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.371 09:06:40 json_config -- json_config/common.sh@41 -- # kill -0 920239 00:04:39.371 09:06:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.938 09:06:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.938 09:06:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.938 09:06:40 json_config -- json_config/common.sh@41 -- # kill -0 920239 00:04:39.938 09:06:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.938 09:06:40 json_config -- json_config/common.sh@43 -- # break 00:04:39.938 09:06:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.938 09:06:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.938 SPDK target shutdown done 00:04:39.938 09:06:40 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:39.938 INFO: relaunching applications... 
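The shutdown just logged follows the pattern from json_config/common.sh that the trace makes visible: clear the configuration, send SIGINT, then poll the PID for up to thirty half-second intervals until it exits. Condensed to its core (PID taken from the trace above; error paths omitted):

    pid=920239                                # app_pid["target"] in the trace above
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes process existence
        sleep 0.5
    done
    echo 'SPDK target shutdown done'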
00:04:39.938 09:06:40 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.938 09:06:40 json_config -- json_config/common.sh@9 -- # local app=target 00:04:39.938 09:06:40 json_config -- json_config/common.sh@10 -- # shift 00:04:39.938 09:06:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.938 09:06:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.938 09:06:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.938 09:06:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.938 09:06:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.938 09:06:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=921761 00:04:39.938 09:06:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.938 Waiting for target to run... 00:04:39.938 09:06:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.938 09:06:40 json_config -- json_config/common.sh@25 -- # waitforlisten 921761 /var/tmp/spdk_tgt.sock 00:04:39.938 09:06:40 json_config -- common/autotest_common.sh@833 -- # '[' -z 921761 ']' 00:04:39.938 09:06:40 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.938 09:06:40 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.939 09:06:40 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.939 09:06:40 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.939 09:06:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.939 [2024-11-19 09:06:40.920633] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:04:39.939 [2024-11-19 09:06:40.920694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid921761 ] 00:04:40.506 [2024-11-19 09:06:41.393442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.506 [2024-11-19 09:06:41.442303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.796 [2024-11-19 09:06:44.477171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.796 [2024-11-19 09:06:44.509547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.365 09:06:45 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.365 09:06:45 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:44.365 09:06:45 json_config -- json_config/common.sh@26 -- # echo '' 00:04:44.365 00:04:44.365 09:06:45 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:44.365 09:06:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:44.365 INFO: Checking if target configuration is the same... 
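The "is the configuration the same" check that follows works by canonicalizing two JSON documents and diffing them: the live configuration from save_config, and the spdk_tgt_config.json the target was just restarted with. A minimal sketch of the idea, assuming config_filter.py -method sort filters stdin to stdout as the redirections in the json_diff.sh trace below suggest:

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter="test/json_config/config_filter.py"

    $rpc save_config | $filter -method sort > /tmp/live.json    # running target's view
    $filter -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json \
        && echo 'INFO: JSON config files are the same'

Sorting before diffing makes the comparison order-insensitive; the later bdev_malloc_delete of MallocBdevForConfigChangeCheck is then enough to flip diff to a nonzero exit, which is the "configuration change detected" branch further down.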
00:04:44.365 09:06:45 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:44.365 09:06:45 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.365 09:06:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.365 + '[' 2 -ne 2 ']' 00:04:44.365 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:44.365 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:44.365 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:44.365 +++ basename /dev/fd/62 00:04:44.365 ++ mktemp /tmp/62.XXX 00:04:44.365 + tmp_file_1=/tmp/62.cpA 00:04:44.365 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.365 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.365 + tmp_file_2=/tmp/spdk_tgt_config.json.fyO 00:04:44.365 + ret=0 00:04:44.365 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.624 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.624 + diff -u /tmp/62.cpA /tmp/spdk_tgt_config.json.fyO 00:04:44.624 + echo 'INFO: JSON config files are the same' 00:04:44.624 INFO: JSON config files are the same 00:04:44.624 + rm /tmp/62.cpA /tmp/spdk_tgt_config.json.fyO 00:04:44.624 + exit 0 00:04:44.624 09:06:45 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:44.624 09:06:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:44.624 INFO: changing configuration and checking if this can be detected... 00:04:44.624 09:06:45 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.624 09:06:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.883 09:06:45 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.883 09:06:45 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:44.883 09:06:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.883 + '[' 2 -ne 2 ']' 00:04:44.883 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:44.883 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:44.883 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:44.883 +++ basename /dev/fd/62 00:04:44.883 ++ mktemp /tmp/62.XXX 00:04:44.883 + tmp_file_1=/tmp/62.7sm 00:04:44.883 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.883 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.883 + tmp_file_2=/tmp/spdk_tgt_config.json.GeP 00:04:44.883 + ret=0 00:04:44.883 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:45.143 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:45.143 + diff -u /tmp/62.7sm /tmp/spdk_tgt_config.json.GeP 00:04:45.143 + ret=1 00:04:45.143 + echo '=== Start of file: /tmp/62.7sm ===' 00:04:45.143 + cat /tmp/62.7sm 00:04:45.143 + echo '=== End of file: /tmp/62.7sm ===' 00:04:45.143 + echo '' 00:04:45.143 + echo '=== Start of file: /tmp/spdk_tgt_config.json.GeP ===' 00:04:45.143 + cat /tmp/spdk_tgt_config.json.GeP 00:04:45.143 + echo '=== End of file: /tmp/spdk_tgt_config.json.GeP ===' 00:04:45.143 + echo '' 00:04:45.143 + rm /tmp/62.7sm /tmp/spdk_tgt_config.json.GeP 00:04:45.143 + exit 1 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:45.143 INFO: configuration change detected. 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:45.143 09:06:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.143 09:06:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@324 -- # [[ -n 921761 ]] 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:45.143 09:06:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.143 09:06:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:45.143 09:06:46 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:45.143 09:06:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:45.143 09:06:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.402 09:06:46 json_config -- json_config/json_config.sh@330 -- # killprocess 921761 00:04:45.402 09:06:46 json_config -- common/autotest_common.sh@952 -- # '[' -z 921761 ']' 00:04:45.402 09:06:46 json_config -- common/autotest_common.sh@956 -- # kill -0 921761 00:04:45.402 09:06:46 json_config -- common/autotest_common.sh@957 -- # uname 00:04:45.402 09:06:46 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:45.402 09:06:46 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 921761 00:04:45.402 09:06:46 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:45.402 09:06:46 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:45.402 09:06:46 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 921761' 00:04:45.402 killing process with pid 921761 00:04:45.402 09:06:46 json_config -- common/autotest_common.sh@971 -- # kill 921761 00:04:45.402 09:06:46 json_config -- common/autotest_common.sh@976 -- # wait 921761 00:04:46.782 09:06:47 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.782 09:06:47 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:46.782 09:06:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.782 09:06:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.782 09:06:47 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:46.782 09:06:47 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:46.782 INFO: Success 00:04:46.782 00:04:46.783 real 0m15.913s 00:04:46.783 user 0m16.547s 00:04:46.783 sys 0m2.604s 00:04:46.783 09:06:47 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.783 09:06:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.783 ************************************ 00:04:46.783 END TEST json_config 00:04:46.783 ************************************ 00:04:46.783 09:06:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:46.783 09:06:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.783 09:06:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.783 09:06:47 -- common/autotest_common.sh@10 -- # set +x 00:04:47.042 ************************************ 00:04:47.042 START TEST json_config_extra_key 00:04:47.042 ************************************ 00:04:47.042 09:06:47 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:47.043 09:06:47 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:47.043 09:06:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:47.043 09:06:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:47.043 09:06:47 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.043 09:06:47 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.043 09:06:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:47.043 09:06:47 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.043 09:06:47 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:47.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.043 --rc genhtml_branch_coverage=1 00:04:47.043 --rc genhtml_function_coverage=1 00:04:47.043 --rc genhtml_legend=1 00:04:47.043 --rc geninfo_all_blocks=1 00:04:47.043 --rc geninfo_unexecuted_blocks=1 00:04:47.043 00:04:47.043 ' 00:04:47.043 09:06:47 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:47.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.043 --rc genhtml_branch_coverage=1 00:04:47.043 --rc genhtml_function_coverage=1 00:04:47.043 --rc genhtml_legend=1 00:04:47.043 --rc geninfo_all_blocks=1 00:04:47.043 --rc geninfo_unexecuted_blocks=1 00:04:47.043 00:04:47.043 ' 00:04:47.043 09:06:47 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:47.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.043 --rc genhtml_branch_coverage=1 00:04:47.043 --rc genhtml_function_coverage=1 00:04:47.043 --rc genhtml_legend=1 00:04:47.043 --rc geninfo_all_blocks=1 00:04:47.043 --rc geninfo_unexecuted_blocks=1 00:04:47.043 00:04:47.043 ' 00:04:47.043 09:06:47 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:47.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.043 --rc genhtml_branch_coverage=1 00:04:47.043 --rc genhtml_function_coverage=1 00:04:47.043 --rc genhtml_legend=1 00:04:47.043 --rc geninfo_all_blocks=1 00:04:47.043 --rc geninfo_unexecuted_blocks=1 00:04:47.043 00:04:47.043 ' 00:04:47.043 09:06:47 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.043 09:06:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:47.043 09:06:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.043 09:06:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.043 09:06:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.043 09:06:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.043 09:06:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.043 09:06:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.043 09:06:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.043 09:06:48 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.043 09:06:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:47.043 09:06:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.043 09:06:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:47.043 INFO: launching applications... 
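Note the real, if harmless, wart captured above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' because some flag variable is empty at that point (the trace does not show which one), and test(1) rejects the empty string as an integer, hence "[: : integer expression expected". The run survives only because the failed test simply reads as false. A defensive sketch, with a hypothetical flag name:

    # ${VAR:-0} substitutes 0 when the variable is unset or empty, so the integer
    # comparison can never see ''.  SOME_TEST_FLAG is a placeholder; the real
    # variable at common.sh:33 is not visible in this trace.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        : # flag-specific setup would go here
    fi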
00:04:47.043 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.043 09:06:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:47.043 09:06:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:47.044 09:06:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.044 09:06:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.044 09:06:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.044 09:06:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.044 09:06:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.044 09:06:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=923092 00:04:47.044 09:06:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:47.044 Waiting for target to run... 00:04:47.044 09:06:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 923092 /var/tmp/spdk_tgt.sock 00:04:47.044 09:06:48 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 923092 ']' 00:04:47.044 09:06:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.044 09:06:48 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.044 09:06:48 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.044 09:06:48 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.044 09:06:48 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.044 09:06:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.044 [2024-11-19 09:06:48.085344] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:04:47.044 [2024-11-19 09:06:48.085399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923092 ] 00:04:47.613 [2024-11-19 09:06:48.541300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.613 [2024-11-19 09:06:48.599257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.872 09:06:48 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.872 09:06:48 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:47.872 09:06:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:47.872 00:04:47.872 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:47.872 INFO: shutting down applications... 
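waitforlisten, invoked above with max_retries=100, is what turns "Waiting for target to run..." into a synchronization point: it blocks until the freshly forked spdk_tgt answers RPC on its socket. A minimal sketch of that readiness probe (the real helper in autotest_common.sh does more bookkeeping, and the rpc_get_methods probe is an assumption here; any cheap RPC would do):

    sock=/var/tmp/spdk_tgt.sock
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &> /dev/null; then
            break                 # target is up and serving RPC
        fi
        sleep 0.1
    done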
00:04:47.872 09:06:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:47.872 09:06:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:47.872 09:06:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.872 09:06:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 923092 ]] 00:04:47.872 09:06:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 923092 00:04:47.872 09:06:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.872 09:06:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.872 09:06:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 923092 00:04:47.872 09:06:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.440 09:06:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.440 09:06:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.440 09:06:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 923092 00:04:48.440 09:06:49 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.440 09:06:49 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:48.440 09:06:49 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.440 09:06:49 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.440 SPDK target shutdown done 00:04:48.440 09:06:49 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:48.440 Success 00:04:48.440 00:04:48.440 real 0m1.588s 00:04:48.440 user 0m1.222s 00:04:48.440 sys 0m0.568s 00:04:48.440 09:06:49 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.440 09:06:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.440 ************************************ 00:04:48.440 END TEST json_config_extra_key 00:04:48.440 ************************************ 00:04:48.440 09:06:49 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.440 09:06:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.440 09:06:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.440 09:06:49 -- common/autotest_common.sh@10 -- # set +x 00:04:48.700 ************************************ 00:04:48.700 START TEST alias_rpc 00:04:48.700 ************************************ 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.700 * Looking for test storage... 
00:04:48.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.700 09:06:49 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.700 --rc genhtml_branch_coverage=1 00:04:48.700 --rc genhtml_function_coverage=1 00:04:48.700 --rc genhtml_legend=1 00:04:48.700 --rc geninfo_all_blocks=1 00:04:48.700 --rc geninfo_unexecuted_blocks=1 00:04:48.700 00:04:48.700 ' 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.700 --rc genhtml_branch_coverage=1 00:04:48.700 --rc genhtml_function_coverage=1 00:04:48.700 --rc genhtml_legend=1 00:04:48.700 --rc geninfo_all_blocks=1 00:04:48.700 --rc geninfo_unexecuted_blocks=1 00:04:48.700 00:04:48.700 ' 00:04:48.700 09:06:49 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.700 --rc genhtml_branch_coverage=1 00:04:48.700 --rc genhtml_function_coverage=1 00:04:48.700 --rc genhtml_legend=1 00:04:48.700 --rc geninfo_all_blocks=1 00:04:48.700 --rc geninfo_unexecuted_blocks=1 00:04:48.700 00:04:48.700 ' 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.700 --rc genhtml_branch_coverage=1 00:04:48.700 --rc genhtml_function_coverage=1 00:04:48.700 --rc genhtml_legend=1 00:04:48.700 --rc geninfo_all_blocks=1 00:04:48.700 --rc geninfo_unexecuted_blocks=1 00:04:48.700 00:04:48.700 ' 00:04:48.700 09:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.700 09:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=923529 00:04:48.700 09:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 923529 00:04:48.700 09:06:49 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 923529 ']' 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.700 09:06:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.700 [2024-11-19 09:06:49.735474] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:04:48.700 [2024-11-19 09:06:49.735524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923529 ] 00:04:48.960 [2024-11-19 09:06:49.800384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.960 [2024-11-19 09:06:49.841063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.220 09:06:50 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.220 09:06:50 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:49.220 09:06:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:49.480 09:06:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 923529 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 923529 ']' 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 923529 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 923529 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 923529' 00:04:49.480 killing process with pid 923529 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@971 -- # kill 923529 00:04:49.480 09:06:50 alias_rpc -- common/autotest_common.sh@976 -- # wait 923529 00:04:49.740 00:04:49.740 real 0m1.146s 00:04:49.740 user 0m1.194s 00:04:49.740 sys 0m0.394s 00:04:49.740 09:06:50 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.740 09:06:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.740 ************************************ 00:04:49.740 END TEST alias_rpc 00:04:49.740 ************************************ 00:04:49.740 09:06:50 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:49.740 09:06:50 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:49.740 09:06:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.740 09:06:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.740 09:06:50 -- common/autotest_common.sh@10 -- # set +x 00:04:49.740 ************************************ 00:04:49.740 START TEST spdkcli_tcp 00:04:49.740 ************************************ 00:04:49.740 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:50.000 * Looking for test storage... 
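The spdkcli_tcp test starting here exercises the RPC client over TCP rather than the UNIX socket, and the trace below shows it needs no target-side changes to do so: socat bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py is pointed at 127.0.0.1:9998. Reduced to its essentials (all arguments copied from the trace; -r and -t appear to be the connection-retry count and per-call timeout):

    # Forward TCP 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # The same RPCs as before, now addressed over TCP.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"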
00:04:50.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.000 09:06:50 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.000 --rc genhtml_branch_coverage=1 00:04:50.000 --rc genhtml_function_coverage=1 00:04:50.000 --rc genhtml_legend=1 00:04:50.000 --rc geninfo_all_blocks=1 00:04:50.000 --rc geninfo_unexecuted_blocks=1 00:04:50.000 00:04:50.000 ' 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.000 --rc genhtml_branch_coverage=1 00:04:50.000 --rc genhtml_function_coverage=1 00:04:50.000 --rc genhtml_legend=1 00:04:50.000 --rc geninfo_all_blocks=1 00:04:50.000 --rc 
geninfo_unexecuted_blocks=1 00:04:50.000 00:04:50.000 ' 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.000 --rc genhtml_branch_coverage=1 00:04:50.000 --rc genhtml_function_coverage=1 00:04:50.000 --rc genhtml_legend=1 00:04:50.000 --rc geninfo_all_blocks=1 00:04:50.000 --rc geninfo_unexecuted_blocks=1 00:04:50.000 00:04:50.000 ' 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.000 --rc genhtml_branch_coverage=1 00:04:50.000 --rc genhtml_function_coverage=1 00:04:50.000 --rc genhtml_legend=1 00:04:50.000 --rc geninfo_all_blocks=1 00:04:50.000 --rc geninfo_unexecuted_blocks=1 00:04:50.000 00:04:50.000 ' 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=923725 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 923725 00:04:50.000 09:06:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 923725 ']' 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.000 09:06:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.000 [2024-11-19 09:06:50.951273] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:04:50.000 [2024-11-19 09:06:50.951325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923725 ] 00:04:50.000 [2024-11-19 09:06:51.025822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.260 [2024-11-19 09:06:51.069561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.260 [2024-11-19 09:06:51.069562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.260 09:06:51 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.260 09:06:51 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:50.260 09:06:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=923844 00:04:50.260 09:06:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:50.260 09:06:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:50.520 [ 00:04:50.520 "bdev_malloc_delete", 00:04:50.520 "bdev_malloc_create", 00:04:50.520 "bdev_null_resize", 00:04:50.520 "bdev_null_delete", 00:04:50.520 "bdev_null_create", 00:04:50.520 "bdev_nvme_cuse_unregister", 00:04:50.520 "bdev_nvme_cuse_register", 00:04:50.520 "bdev_opal_new_user", 00:04:50.520 "bdev_opal_set_lock_state", 00:04:50.520 "bdev_opal_delete", 00:04:50.520 "bdev_opal_get_info", 00:04:50.520 "bdev_opal_create", 00:04:50.520 "bdev_nvme_opal_revert", 00:04:50.520 "bdev_nvme_opal_init", 00:04:50.520 "bdev_nvme_send_cmd", 00:04:50.520 "bdev_nvme_set_keys", 00:04:50.520 "bdev_nvme_get_path_iostat", 00:04:50.520 "bdev_nvme_get_mdns_discovery_info", 00:04:50.520 "bdev_nvme_stop_mdns_discovery", 00:04:50.520 "bdev_nvme_start_mdns_discovery", 00:04:50.520 "bdev_nvme_set_multipath_policy", 00:04:50.520 "bdev_nvme_set_preferred_path", 00:04:50.520 "bdev_nvme_get_io_paths", 00:04:50.520 "bdev_nvme_remove_error_injection", 00:04:50.520 "bdev_nvme_add_error_injection", 00:04:50.520 "bdev_nvme_get_discovery_info", 00:04:50.520 "bdev_nvme_stop_discovery", 00:04:50.520 "bdev_nvme_start_discovery", 00:04:50.520 "bdev_nvme_get_controller_health_info", 00:04:50.520 "bdev_nvme_disable_controller", 00:04:50.520 "bdev_nvme_enable_controller", 00:04:50.520 "bdev_nvme_reset_controller", 00:04:50.520 "bdev_nvme_get_transport_statistics", 00:04:50.520 "bdev_nvme_apply_firmware", 00:04:50.520 "bdev_nvme_detach_controller", 00:04:50.520 "bdev_nvme_get_controllers", 00:04:50.520 "bdev_nvme_attach_controller", 00:04:50.520 "bdev_nvme_set_hotplug", 00:04:50.520 "bdev_nvme_set_options", 00:04:50.520 "bdev_passthru_delete", 00:04:50.520 "bdev_passthru_create", 00:04:50.520 "bdev_lvol_set_parent_bdev", 00:04:50.520 "bdev_lvol_set_parent", 00:04:50.520 "bdev_lvol_check_shallow_copy", 00:04:50.520 "bdev_lvol_start_shallow_copy", 00:04:50.520 "bdev_lvol_grow_lvstore", 00:04:50.520 "bdev_lvol_get_lvols", 00:04:50.520 "bdev_lvol_get_lvstores", 00:04:50.520 "bdev_lvol_delete", 00:04:50.520 "bdev_lvol_set_read_only", 00:04:50.520 "bdev_lvol_resize", 00:04:50.520 "bdev_lvol_decouple_parent", 00:04:50.520 "bdev_lvol_inflate", 00:04:50.520 "bdev_lvol_rename", 00:04:50.520 "bdev_lvol_clone_bdev", 00:04:50.520 "bdev_lvol_clone", 00:04:50.520 "bdev_lvol_snapshot", 00:04:50.520 "bdev_lvol_create", 00:04:50.520 "bdev_lvol_delete_lvstore", 00:04:50.520 "bdev_lvol_rename_lvstore", 
00:04:50.520 "bdev_lvol_create_lvstore", 00:04:50.520 "bdev_raid_set_options", 00:04:50.520 "bdev_raid_remove_base_bdev", 00:04:50.520 "bdev_raid_add_base_bdev", 00:04:50.520 "bdev_raid_delete", 00:04:50.520 "bdev_raid_create", 00:04:50.520 "bdev_raid_get_bdevs", 00:04:50.520 "bdev_error_inject_error", 00:04:50.520 "bdev_error_delete", 00:04:50.520 "bdev_error_create", 00:04:50.520 "bdev_split_delete", 00:04:50.520 "bdev_split_create", 00:04:50.520 "bdev_delay_delete", 00:04:50.520 "bdev_delay_create", 00:04:50.520 "bdev_delay_update_latency", 00:04:50.520 "bdev_zone_block_delete", 00:04:50.520 "bdev_zone_block_create", 00:04:50.520 "blobfs_create", 00:04:50.520 "blobfs_detect", 00:04:50.520 "blobfs_set_cache_size", 00:04:50.520 "bdev_aio_delete", 00:04:50.520 "bdev_aio_rescan", 00:04:50.520 "bdev_aio_create", 00:04:50.520 "bdev_ftl_set_property", 00:04:50.520 "bdev_ftl_get_properties", 00:04:50.520 "bdev_ftl_get_stats", 00:04:50.520 "bdev_ftl_unmap", 00:04:50.520 "bdev_ftl_unload", 00:04:50.520 "bdev_ftl_delete", 00:04:50.520 "bdev_ftl_load", 00:04:50.520 "bdev_ftl_create", 00:04:50.520 "bdev_virtio_attach_controller", 00:04:50.520 "bdev_virtio_scsi_get_devices", 00:04:50.520 "bdev_virtio_detach_controller", 00:04:50.520 "bdev_virtio_blk_set_hotplug", 00:04:50.520 "bdev_iscsi_delete", 00:04:50.520 "bdev_iscsi_create", 00:04:50.520 "bdev_iscsi_set_options", 00:04:50.520 "accel_error_inject_error", 00:04:50.520 "ioat_scan_accel_module", 00:04:50.520 "dsa_scan_accel_module", 00:04:50.520 "iaa_scan_accel_module", 00:04:50.520 "vfu_virtio_create_fs_endpoint", 00:04:50.520 "vfu_virtio_create_scsi_endpoint", 00:04:50.520 "vfu_virtio_scsi_remove_target", 00:04:50.520 "vfu_virtio_scsi_add_target", 00:04:50.520 "vfu_virtio_create_blk_endpoint", 00:04:50.520 "vfu_virtio_delete_endpoint", 00:04:50.520 "keyring_file_remove_key", 00:04:50.520 "keyring_file_add_key", 00:04:50.520 "keyring_linux_set_options", 00:04:50.520 "fsdev_aio_delete", 00:04:50.520 "fsdev_aio_create", 00:04:50.520 "iscsi_get_histogram", 00:04:50.520 "iscsi_enable_histogram", 00:04:50.520 "iscsi_set_options", 00:04:50.520 "iscsi_get_auth_groups", 00:04:50.520 "iscsi_auth_group_remove_secret", 00:04:50.520 "iscsi_auth_group_add_secret", 00:04:50.520 "iscsi_delete_auth_group", 00:04:50.520 "iscsi_create_auth_group", 00:04:50.520 "iscsi_set_discovery_auth", 00:04:50.520 "iscsi_get_options", 00:04:50.520 "iscsi_target_node_request_logout", 00:04:50.520 "iscsi_target_node_set_redirect", 00:04:50.520 "iscsi_target_node_set_auth", 00:04:50.520 "iscsi_target_node_add_lun", 00:04:50.520 "iscsi_get_stats", 00:04:50.520 "iscsi_get_connections", 00:04:50.520 "iscsi_portal_group_set_auth", 00:04:50.520 "iscsi_start_portal_group", 00:04:50.520 "iscsi_delete_portal_group", 00:04:50.520 "iscsi_create_portal_group", 00:04:50.520 "iscsi_get_portal_groups", 00:04:50.520 "iscsi_delete_target_node", 00:04:50.520 "iscsi_target_node_remove_pg_ig_maps", 00:04:50.520 "iscsi_target_node_add_pg_ig_maps", 00:04:50.520 "iscsi_create_target_node", 00:04:50.520 "iscsi_get_target_nodes", 00:04:50.520 "iscsi_delete_initiator_group", 00:04:50.520 "iscsi_initiator_group_remove_initiators", 00:04:50.520 "iscsi_initiator_group_add_initiators", 00:04:50.520 "iscsi_create_initiator_group", 00:04:50.520 "iscsi_get_initiator_groups", 00:04:50.520 "nvmf_set_crdt", 00:04:50.520 "nvmf_set_config", 00:04:50.520 "nvmf_set_max_subsystems", 00:04:50.520 "nvmf_stop_mdns_prr", 00:04:50.520 "nvmf_publish_mdns_prr", 00:04:50.520 "nvmf_subsystem_get_listeners", 00:04:50.520 
"nvmf_subsystem_get_qpairs", 00:04:50.520 "nvmf_subsystem_get_controllers", 00:04:50.520 "nvmf_get_stats", 00:04:50.520 "nvmf_get_transports", 00:04:50.520 "nvmf_create_transport", 00:04:50.520 "nvmf_get_targets", 00:04:50.520 "nvmf_delete_target", 00:04:50.520 "nvmf_create_target", 00:04:50.520 "nvmf_subsystem_allow_any_host", 00:04:50.520 "nvmf_subsystem_set_keys", 00:04:50.520 "nvmf_discovery_referral_remove_host", 00:04:50.520 "nvmf_discovery_referral_add_host", 00:04:50.520 "nvmf_subsystem_remove_host", 00:04:50.520 "nvmf_subsystem_add_host", 00:04:50.520 "nvmf_ns_remove_host", 00:04:50.520 "nvmf_ns_add_host", 00:04:50.520 "nvmf_subsystem_remove_ns", 00:04:50.520 "nvmf_subsystem_set_ns_ana_group", 00:04:50.520 "nvmf_subsystem_add_ns", 00:04:50.520 "nvmf_subsystem_listener_set_ana_state", 00:04:50.520 "nvmf_discovery_get_referrals", 00:04:50.520 "nvmf_discovery_remove_referral", 00:04:50.521 "nvmf_discovery_add_referral", 00:04:50.521 "nvmf_subsystem_remove_listener", 00:04:50.521 "nvmf_subsystem_add_listener", 00:04:50.521 "nvmf_delete_subsystem", 00:04:50.521 "nvmf_create_subsystem", 00:04:50.521 "nvmf_get_subsystems", 00:04:50.521 "env_dpdk_get_mem_stats", 00:04:50.521 "nbd_get_disks", 00:04:50.521 "nbd_stop_disk", 00:04:50.521 "nbd_start_disk", 00:04:50.521 "ublk_recover_disk", 00:04:50.521 "ublk_get_disks", 00:04:50.521 "ublk_stop_disk", 00:04:50.521 "ublk_start_disk", 00:04:50.521 "ublk_destroy_target", 00:04:50.521 "ublk_create_target", 00:04:50.521 "virtio_blk_create_transport", 00:04:50.521 "virtio_blk_get_transports", 00:04:50.521 "vhost_controller_set_coalescing", 00:04:50.521 "vhost_get_controllers", 00:04:50.521 "vhost_delete_controller", 00:04:50.521 "vhost_create_blk_controller", 00:04:50.521 "vhost_scsi_controller_remove_target", 00:04:50.521 "vhost_scsi_controller_add_target", 00:04:50.521 "vhost_start_scsi_controller", 00:04:50.521 "vhost_create_scsi_controller", 00:04:50.521 "thread_set_cpumask", 00:04:50.521 "scheduler_set_options", 00:04:50.521 "framework_get_governor", 00:04:50.521 "framework_get_scheduler", 00:04:50.521 "framework_set_scheduler", 00:04:50.521 "framework_get_reactors", 00:04:50.521 "thread_get_io_channels", 00:04:50.521 "thread_get_pollers", 00:04:50.521 "thread_get_stats", 00:04:50.521 "framework_monitor_context_switch", 00:04:50.521 "spdk_kill_instance", 00:04:50.521 "log_enable_timestamps", 00:04:50.521 "log_get_flags", 00:04:50.521 "log_clear_flag", 00:04:50.521 "log_set_flag", 00:04:50.521 "log_get_level", 00:04:50.521 "log_set_level", 00:04:50.521 "log_get_print_level", 00:04:50.521 "log_set_print_level", 00:04:50.521 "framework_enable_cpumask_locks", 00:04:50.521 "framework_disable_cpumask_locks", 00:04:50.521 "framework_wait_init", 00:04:50.521 "framework_start_init", 00:04:50.521 "scsi_get_devices", 00:04:50.521 "bdev_get_histogram", 00:04:50.521 "bdev_enable_histogram", 00:04:50.521 "bdev_set_qos_limit", 00:04:50.521 "bdev_set_qd_sampling_period", 00:04:50.521 "bdev_get_bdevs", 00:04:50.521 "bdev_reset_iostat", 00:04:50.521 "bdev_get_iostat", 00:04:50.521 "bdev_examine", 00:04:50.521 "bdev_wait_for_examine", 00:04:50.521 "bdev_set_options", 00:04:50.521 "accel_get_stats", 00:04:50.521 "accel_set_options", 00:04:50.521 "accel_set_driver", 00:04:50.521 "accel_crypto_key_destroy", 00:04:50.521 "accel_crypto_keys_get", 00:04:50.521 "accel_crypto_key_create", 00:04:50.521 "accel_assign_opc", 00:04:50.521 "accel_get_module_info", 00:04:50.521 "accel_get_opc_assignments", 00:04:50.521 "vmd_rescan", 00:04:50.521 "vmd_remove_device", 00:04:50.521 
"vmd_enable", 00:04:50.521 "sock_get_default_impl", 00:04:50.521 "sock_set_default_impl", 00:04:50.521 "sock_impl_set_options", 00:04:50.521 "sock_impl_get_options", 00:04:50.521 "iobuf_get_stats", 00:04:50.521 "iobuf_set_options", 00:04:50.521 "keyring_get_keys", 00:04:50.521 "vfu_tgt_set_base_path", 00:04:50.521 "framework_get_pci_devices", 00:04:50.521 "framework_get_config", 00:04:50.521 "framework_get_subsystems", 00:04:50.521 "fsdev_set_opts", 00:04:50.521 "fsdev_get_opts", 00:04:50.521 "trace_get_info", 00:04:50.521 "trace_get_tpoint_group_mask", 00:04:50.521 "trace_disable_tpoint_group", 00:04:50.521 "trace_enable_tpoint_group", 00:04:50.521 "trace_clear_tpoint_mask", 00:04:50.521 "trace_set_tpoint_mask", 00:04:50.521 "notify_get_notifications", 00:04:50.521 "notify_get_types", 00:04:50.521 "spdk_get_version", 00:04:50.521 "rpc_get_methods" 00:04:50.521 ] 00:04:50.521 09:06:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:50.521 09:06:51 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:50.521 09:06:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.521 09:06:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:50.521 09:06:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 923725 00:04:50.521 09:06:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 923725 ']' 00:04:50.521 09:06:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 923725 00:04:50.521 09:06:51 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:50.521 09:06:51 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:50.521 09:06:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 923725 00:04:50.521 09:06:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:50.781 09:06:51 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:50.781 09:06:51 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 923725' 00:04:50.781 killing process with pid 923725 00:04:50.781 09:06:51 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 923725 00:04:50.781 09:06:51 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 923725 00:04:51.041 00:04:51.041 real 0m1.163s 00:04:51.041 user 0m1.956s 00:04:51.041 sys 0m0.462s 00:04:51.041 09:06:51 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.041 09:06:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.041 ************************************ 00:04:51.041 END TEST spdkcli_tcp 00:04:51.041 ************************************ 00:04:51.041 09:06:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.041 09:06:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.041 09:06:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.041 09:06:51 -- common/autotest_common.sh@10 -- # set +x 00:04:51.041 ************************************ 00:04:51.041 START TEST dpdk_mem_utility 00:04:51.041 ************************************ 00:04:51.041 09:06:51 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.041 * Looking for test storage... 
00:04:51.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:51.041 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:51.041 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:51.041 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:51.300 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:51.300 09:06:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.300 09:06:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.300 09:06:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.300 09:06:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.300 09:06:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.301 09:06:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:51.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.301 --rc genhtml_branch_coverage=1 00:04:51.301 --rc genhtml_function_coverage=1 00:04:51.301 --rc genhtml_legend=1 00:04:51.301 --rc geninfo_all_blocks=1 00:04:51.301 --rc geninfo_unexecuted_blocks=1 00:04:51.301 00:04:51.301 ' 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:51.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.301 --rc 
genhtml_branch_coverage=1 00:04:51.301 --rc genhtml_function_coverage=1 00:04:51.301 --rc genhtml_legend=1 00:04:51.301 --rc geninfo_all_blocks=1 00:04:51.301 --rc geninfo_unexecuted_blocks=1 00:04:51.301 00:04:51.301 ' 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:51.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.301 --rc genhtml_branch_coverage=1 00:04:51.301 --rc genhtml_function_coverage=1 00:04:51.301 --rc genhtml_legend=1 00:04:51.301 --rc geninfo_all_blocks=1 00:04:51.301 --rc geninfo_unexecuted_blocks=1 00:04:51.301 00:04:51.301 ' 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:51.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.301 --rc genhtml_branch_coverage=1 00:04:51.301 --rc genhtml_function_coverage=1 00:04:51.301 --rc genhtml_legend=1 00:04:51.301 --rc geninfo_all_blocks=1 00:04:51.301 --rc geninfo_unexecuted_blocks=1 00:04:51.301 00:04:51.301 ' 00:04:51.301 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:51.301 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=923942 00:04:51.301 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 923942 00:04:51.301 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 923942 ']' 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:51.301 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.301 [2024-11-19 09:06:52.179910] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
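What the dpdk_mem_utility test is setting up here: launch spdk_tgt, wait for its RPC socket, ask it to dump DPDK memory statistics over RPC, then post-process the dump with dpdk_mem_info.py. Condensed to by-hand form (a sketch using the same workspace paths; the readiness loop is an illustrative stand-in for the waitforlisten helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt &                        # serves RPC on /var/tmp/spdk.sock
    spdkpid=$!
    until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats       # dumps to /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                    # heap/mempool/memzone summary
    $SPDK/scripts/dpdk_mem_info.py -m 0               # element-level map of heap id 0
    kill $spdkpid

The summary and the -m 0 heap map are what the test prints in the output that follows.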
00:04:51.301 [2024-11-19 09:06:52.179969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid923942 ] 00:04:51.301 [2024-11-19 09:06:52.255832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.301 [2024-11-19 09:06:52.298417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.560 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.560 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:51.560 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:51.561 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:51.561 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.561 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.561 { 00:04:51.561 "filename": "/tmp/spdk_mem_dump.txt" 00:04:51.561 } 00:04:51.561 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.561 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:51.561 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:51.561 1 heaps totaling size 810.000000 MiB 00:04:51.561 size: 810.000000 MiB heap id: 0 00:04:51.561 end heaps---------- 00:04:51.561 9 mempools totaling size 595.772034 MiB 00:04:51.561 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:51.561 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:51.561 size: 92.545471 MiB name: bdev_io_923942 00:04:51.561 size: 50.003479 MiB name: msgpool_923942 00:04:51.561 size: 36.509338 MiB name: fsdev_io_923942 00:04:51.561 size: 21.763794 MiB name: PDU_Pool 00:04:51.561 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:51.561 size: 4.133484 MiB name: evtpool_923942 00:04:51.561 size: 0.026123 MiB name: Session_Pool 00:04:51.561 end mempools------- 00:04:51.561 6 memzones totaling size 4.142822 MiB 00:04:51.561 size: 1.000366 MiB name: RG_ring_0_923942 00:04:51.561 size: 1.000366 MiB name: RG_ring_1_923942 00:04:51.561 size: 1.000366 MiB name: RG_ring_4_923942 00:04:51.561 size: 1.000366 MiB name: RG_ring_5_923942 00:04:51.561 size: 0.125366 MiB name: RG_ring_2_923942 00:04:51.561 size: 0.015991 MiB name: RG_ring_3_923942 00:04:51.561 end memzones------- 00:04:51.561 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:51.820 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:51.820 list of free elements. 
size: 10.862488 MiB 00:04:51.820 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:51.820 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:51.820 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:51.820 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:51.820 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:51.820 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:51.821 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:51.821 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:51.821 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:51.821 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:51.821 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:51.821 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:51.821 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:51.821 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:51.821 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:51.821 list of standard malloc elements. size: 199.218628 MiB 00:04:51.821 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:51.821 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:51.821 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:51.821 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:51.821 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:51.821 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:51.821 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:51.821 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:51.821 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:51.821 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:51.821 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:51.821 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:51.821 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:51.821 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:51.821 list of memzone associated elements. size: 599.918884 MiB 00:04:51.821 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:51.821 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:51.821 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:51.821 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:51.821 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:51.821 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_923942_0 00:04:51.821 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:51.821 associated memzone info: size: 48.002930 MiB name: MP_msgpool_923942_0 00:04:51.821 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:51.821 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_923942_0 00:04:51.821 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:51.821 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:51.821 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:51.821 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:51.821 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:51.821 associated memzone info: size: 3.000122 MiB name: MP_evtpool_923942_0 00:04:51.821 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:51.821 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_923942 00:04:51.821 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:51.821 associated memzone info: size: 1.007996 MiB name: MP_evtpool_923942 00:04:51.821 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:51.821 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:51.821 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:51.821 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:51.821 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:51.821 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:51.821 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:51.821 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:51.821 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:51.821 associated memzone info: size: 1.000366 MiB name: RG_ring_0_923942 00:04:51.821 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:51.821 associated memzone info: size: 1.000366 MiB name: RG_ring_1_923942 00:04:51.821 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:51.821 associated memzone info: size: 1.000366 MiB name: RG_ring_4_923942 00:04:51.821 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:51.821 associated memzone info: size: 1.000366 MiB name: RG_ring_5_923942 00:04:51.821 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:51.821 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_923942 00:04:51.821 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:51.821 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_923942 00:04:51.821 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:51.821 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:51.821 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:51.821 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:51.821 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:51.821 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:51.821 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:51.821 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_923942 00:04:51.821 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:51.821 associated memzone info: size: 0.125366 MiB name: RG_ring_2_923942 00:04:51.821 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:51.821 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:51.821 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:51.821 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:51.821 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:51.821 associated memzone info: size: 0.015991 MiB name: RG_ring_3_923942 00:04:51.821 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:51.821 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:51.821 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:51.821 associated memzone info: size: 0.000183 MiB name: MP_msgpool_923942 00:04:51.821 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:51.821 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_923942 00:04:51.821 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:51.821 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_923942 00:04:51.821 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:51.821 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:51.821 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:51.821 09:06:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 923942 00:04:51.821 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 923942 ']' 00:04:51.821 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 923942 00:04:51.821 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:51.821 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.821 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 923942 00:04:51.821 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:51.821 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:51.821 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 923942' 00:04:51.821 killing process with pid 923942 00:04:51.821 09:06:52 dpdk_mem_utility -- 
common/autotest_common.sh@971 -- # kill 923942 00:04:51.821 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 923942 00:04:52.081 00:04:52.081 real 0m1.036s 00:04:52.081 user 0m0.961s 00:04:52.081 sys 0m0.415s 00:04:52.081 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.081 09:06:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.081 ************************************ 00:04:52.081 END TEST dpdk_mem_utility 00:04:52.081 ************************************ 00:04:52.081 09:06:53 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:52.081 09:06:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.081 09:06:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.081 09:06:53 -- common/autotest_common.sh@10 -- # set +x 00:04:52.081 ************************************ 00:04:52.081 START TEST event 00:04:52.081 ************************************ 00:04:52.081 09:06:53 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:52.341 * Looking for test storage... 00:04:52.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:52.341 09:06:53 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.341 09:06:53 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.341 09:06:53 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.341 09:06:53 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.341 09:06:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.341 09:06:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.341 09:06:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.341 09:06:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.341 09:06:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.341 09:06:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.341 09:06:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.341 09:06:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.341 09:06:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.341 09:06:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.341 09:06:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.341 09:06:53 event -- scripts/common.sh@344 -- # case "$op" in 00:04:52.341 09:06:53 event -- scripts/common.sh@345 -- # : 1 00:04:52.341 09:06:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.341 09:06:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.341 09:06:53 event -- scripts/common.sh@365 -- # decimal 1 00:04:52.341 09:06:53 event -- scripts/common.sh@353 -- # local d=1 00:04:52.341 09:06:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.341 09:06:53 event -- scripts/common.sh@355 -- # echo 1 00:04:52.341 09:06:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.341 09:06:53 event -- scripts/common.sh@366 -- # decimal 2 00:04:52.341 09:06:53 event -- scripts/common.sh@353 -- # local d=2 00:04:52.341 09:06:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.341 09:06:53 event -- scripts/common.sh@355 -- # echo 2 00:04:52.341 09:06:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.341 09:06:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.341 09:06:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.341 09:06:53 event -- scripts/common.sh@368 -- # return 0 00:04:52.341 09:06:53 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.341 09:06:53 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.341 --rc genhtml_branch_coverage=1 00:04:52.341 --rc genhtml_function_coverage=1 00:04:52.341 --rc genhtml_legend=1 00:04:52.341 --rc geninfo_all_blocks=1 00:04:52.341 --rc geninfo_unexecuted_blocks=1 00:04:52.341 00:04:52.341 ' 00:04:52.341 09:06:53 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.341 --rc genhtml_branch_coverage=1 00:04:52.341 --rc genhtml_function_coverage=1 00:04:52.341 --rc genhtml_legend=1 00:04:52.342 --rc geninfo_all_blocks=1 00:04:52.342 --rc geninfo_unexecuted_blocks=1 00:04:52.342 00:04:52.342 ' 00:04:52.342 09:06:53 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.342 --rc genhtml_branch_coverage=1 00:04:52.342 --rc genhtml_function_coverage=1 00:04:52.342 --rc genhtml_legend=1 00:04:52.342 --rc geninfo_all_blocks=1 00:04:52.342 --rc geninfo_unexecuted_blocks=1 00:04:52.342 00:04:52.342 ' 00:04:52.342 09:06:53 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.342 --rc genhtml_branch_coverage=1 00:04:52.342 --rc genhtml_function_coverage=1 00:04:52.342 --rc genhtml_legend=1 00:04:52.342 --rc geninfo_all_blocks=1 00:04:52.342 --rc geninfo_unexecuted_blocks=1 00:04:52.342 00:04:52.342 ' 00:04:52.342 09:06:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:52.342 09:06:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:52.342 09:06:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:52.342 09:06:53 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:52.342 09:06:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.342 09:06:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.342 ************************************ 00:04:52.342 START TEST event_perf 00:04:52.342 ************************************ 00:04:52.342 09:06:53 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:52.342 Running I/O for 1 seconds...[2024-11-19 09:06:53.287844] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:04:52.342 [2024-11-19 09:06:53.287922] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924221 ] 00:04:52.342 [2024-11-19 09:06:53.366251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.601 [2024-11-19 09:06:53.411548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.601 [2024-11-19 09:06:53.411657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.601 [2024-11-19 09:06:53.411761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.601 [2024-11-19 09:06:53.411762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.541 Running I/O for 1 seconds... 00:04:53.541 lcore 0: 204192 00:04:53.541 lcore 1: 204190 00:04:53.541 lcore 2: 204191 00:04:53.541 lcore 3: 204190 00:04:53.541 done. 00:04:53.541 00:04:53.541 real 0m1.182s 00:04:53.541 user 0m4.098s 00:04:53.541 sys 0m0.080s 00:04:53.541 09:06:54 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.541 09:06:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.541 ************************************ 00:04:53.541 END TEST event_perf 00:04:53.541 ************************************ 00:04:53.541 09:06:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:53.541 09:06:54 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:53.541 09:06:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.541 09:06:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.541 ************************************ 00:04:53.541 START TEST event_reactor 00:04:53.541 ************************************ 00:04:53.541 09:06:54 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:53.541 [2024-11-19 09:06:54.546167] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
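For context on the event_perf numbers just above: the binary drives the SPDK reactors for a fixed time and the "lcore N:" lines are, roughly, per-lcore event counts for the run. Reproducing the invocation (a sketch; mask and duration as traced above):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -m 0xF: reactors on cores 0-3, hence the four "lcore N:" counters
    # -t 1:   run the measurement for one second
    $SPDK/test/event/event_perf/event_perf -m 0xF -t 1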
00:04:53.541 [2024-11-19 09:06:54.546235] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924472 ] 00:04:53.800 [2024-11-19 09:06:54.625003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.800 [2024-11-19 09:06:54.666477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.738 test_start 00:04:54.738 oneshot 00:04:54.738 tick 100 00:04:54.738 tick 100 00:04:54.738 tick 250 00:04:54.738 tick 100 00:04:54.738 tick 100 00:04:54.738 tick 250 00:04:54.738 tick 100 00:04:54.738 tick 500 00:04:54.738 tick 100 00:04:54.738 tick 100 00:04:54.738 tick 250 00:04:54.738 tick 100 00:04:54.738 tick 100 00:04:54.738 test_end 00:04:54.738 00:04:54.738 real 0m1.179s 00:04:54.738 user 0m1.098s 00:04:54.738 sys 0m0.078s 00:04:54.738 09:06:55 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.738 09:06:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:54.738 ************************************ 00:04:54.738 END TEST event_reactor 00:04:54.738 ************************************ 00:04:54.738 09:06:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:54.738 09:06:55 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:54.738 09:06:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.738 09:06:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.738 ************************************ 00:04:54.738 START TEST event_reactor_perf 00:04:54.738 ************************************ 00:04:54.738 09:06:55 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:54.997 [2024-11-19 09:06:55.798238] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:04:54.997 [2024-11-19 09:06:55.798299] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid924718 ] 00:04:54.997 [2024-11-19 09:06:55.876018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.997 [2024-11-19 09:06:55.917724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.935 test_start 00:04:55.935 test_end 00:04:55.935 Performance: 492842 events per second 00:04:55.935 00:04:55.935 real 0m1.182s 00:04:55.935 user 0m1.101s 00:04:55.935 sys 0m0.076s 00:04:55.935 09:06:56 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.935 09:06:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.935 ************************************ 00:04:55.935 END TEST event_reactor_perf 00:04:55.935 ************************************ 00:04:56.195 09:06:56 event -- event/event.sh@49 -- # uname -s 00:04:56.195 09:06:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:56.195 09:06:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:56.195 09:06:56 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.195 09:06:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.195 09:06:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.195 ************************************ 00:04:56.195 START TEST event_scheduler 00:04:56.195 ************************************ 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:56.195 * Looking for test storage... 
00:04:56.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.195 09:06:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.195 --rc genhtml_branch_coverage=1 00:04:56.195 --rc genhtml_function_coverage=1 00:04:56.195 --rc genhtml_legend=1 00:04:56.195 --rc geninfo_all_blocks=1 00:04:56.195 --rc geninfo_unexecuted_blocks=1 00:04:56.195 00:04:56.195 ' 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.195 --rc genhtml_branch_coverage=1 00:04:56.195 --rc genhtml_function_coverage=1 00:04:56.195 --rc genhtml_legend=1 00:04:56.195 --rc geninfo_all_blocks=1 00:04:56.195 --rc geninfo_unexecuted_blocks=1 00:04:56.195 00:04:56.195 ' 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.195 --rc genhtml_branch_coverage=1 00:04:56.195 --rc genhtml_function_coverage=1 00:04:56.195 --rc genhtml_legend=1 00:04:56.195 --rc geninfo_all_blocks=1 00:04:56.195 --rc geninfo_unexecuted_blocks=1 00:04:56.195 00:04:56.195 ' 00:04:56.195 09:06:57 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:56.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.195 --rc genhtml_branch_coverage=1 00:04:56.195 --rc genhtml_function_coverage=1 00:04:56.195 --rc genhtml_legend=1 00:04:56.195 --rc geninfo_all_blocks=1 00:04:56.195 --rc geninfo_unexecuted_blocks=1 00:04:56.195 00:04:56.195 ' 00:04:56.195 09:06:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:56.195 09:06:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=925020 00:04:56.196 09:06:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.196 09:06:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:56.196 09:06:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 925020 
00:04:56.196 09:06:57 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 925020 ']' 00:04:56.196 09:06:57 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.196 09:06:57 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.196 09:06:57 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.196 09:06:57 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.196 09:06:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.196 [2024-11-19 09:06:57.246194] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:04:56.196 [2024-11-19 09:06:57.246237] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925020 ] 00:04:56.454 [2024-11-19 09:06:57.304055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:56.454 [2024-11-19 09:06:57.351240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.454 [2024-11-19 09:06:57.351348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.454 [2024-11-19 09:06:57.351458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.454 [2024-11-19 09:06:57.351459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.454 09:06:57 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.454 09:06:57 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:56.454 09:06:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:56.454 09:06:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.454 09:06:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.455 [2024-11-19 09:06:57.412079] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:56.455 [2024-11-19 09:06:57.412096] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:56.455 [2024-11-19 09:06:57.412105] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:56.455 [2024-11-19 09:06:57.412111] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:56.455 [2024-11-19 09:06:57.412116] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:56.455 09:06:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.455 09:06:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:56.455 09:06:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.455 09:06:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.455 [2024-11-19 09:06:57.486178] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
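The scheduler_create_thread sub-test that follows is driven entirely over JSON-RPC: the app was started with --wait-for-rpc, is switched to the dynamic scheduler, and threads are then created through an out-of-tree rpc.py plugin. A condensed sketch of the calls traced below (it assumes the scheduler_plugin module from test/event/scheduler is importable, e.g. via PYTHONPATH):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py
    $rpc framework_set_scheduler dynamic   # may log a dpdk governor error, as above
    $rpc framework_start_init              # leave the --wait-for-rpc holding state
    # plugin-provided method: -n thread name, -m cpumask, -a active percentage
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0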
00:04:56.455 09:06:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.455 09:06:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:56.455 09:06:57 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.455 09:06:57 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.455 09:06:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 ************************************ 00:04:56.714 START TEST scheduler_create_thread 00:04:56.714 ************************************ 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 2 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 3 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 4 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 5 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 6 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 7 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 8 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 9 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 10 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.714 09:06:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.092 09:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.092 09:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:58.092 09:06:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:58.092 09:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.092 09:06:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.469 09:07:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.469 00:04:59.469 real 0m2.620s 00:04:59.469 user 0m0.026s 00:04:59.469 sys 0m0.004s 00:04:59.469 09:07:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.469 09:07:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.469 ************************************ 00:04:59.469 END TEST scheduler_create_thread 00:04:59.469 ************************************ 00:04:59.469 09:07:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:59.469 09:07:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 925020 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 925020 ']' 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 925020 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 925020 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 925020' 00:04:59.469 killing process with pid 925020 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 925020 00:04:59.469 09:07:00 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 925020 00:04:59.728 [2024-11-19 09:07:00.620602] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
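killprocess, invoked after every sub-test in this log, is a liveness-check-then-reap helper; its traced behavior corresponds roughly to the following (a paraphrased sketch; the trace's extra ps/uname guard against killing a sudo process is omitted):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1   # fails fast if the pid is already gone
        kill "$pid"                  # SIGTERM; the app's signal handler shuts it down
        wait "$pid"                  # reap the child and propagate its exit status
    }
    killprocess 925020               # e.g. the scheduler app above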
00:04:59.987 00:04:59.987 real 0m3.760s 00:04:59.987 user 0m5.680s 00:04:59.987 sys 0m0.361s 00:04:59.987 09:07:00 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.987 09:07:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.987 ************************************ 00:04:59.987 END TEST event_scheduler 00:04:59.987 ************************************ 00:04:59.987 09:07:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:59.987 09:07:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:59.987 09:07:00 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.987 09:07:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.987 09:07:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.987 ************************************ 00:04:59.987 START TEST app_repeat 00:04:59.987 ************************************ 00:04:59.987 09:07:00 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=925752 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 925752' 00:04:59.987 Process app_repeat pid: 925752 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:59.987 spdk_app_start Round 0 00:04:59.987 09:07:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 925752 /var/tmp/spdk-nbd.sock 00:04:59.987 09:07:00 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 925752 ']' 00:04:59.987 09:07:00 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.987 09:07:00 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.987 09:07:00 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.987 09:07:00 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.987 09:07:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.987 [2024-11-19 09:07:00.901381] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
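app_repeat restarts the same SPDK app three times ("rounds") and re-runs the nbd verification each time. Stripped of workspace paths, the launch-and-wait pattern traced above looks roughly like the sketch below; the real waitforlisten and killprocess helpers in autotest_common.sh do more validation than this:

  # Sketch of killprocess: probe the pid, refuse to SIGTERM a bare sudo
  # wrapper, then signal and reap -- as in the trace that ends the scheduler test.
  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                       # still alive?
    if [ "$(uname)" = Linux ]; then
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
  }

  sock=/var/tmp/spdk-nbd.sock
  ./test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

  # Sketch of waitforlisten: poll until the app answers RPC on its socket.
  for ((i = 0; i < 100; i++)); do                    # max_retries=100, per the trace
    [ -S "$sock" ] && ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1                                        # assumption: poll interval not in the log
  done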
00:04:59.987 [2024-11-19 09:07:00.901436] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925752 ] 00:04:59.987 [2024-11-19 09:07:00.979794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.987 [2024-11-19 09:07:01.020844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.987 [2024-11-19 09:07:01.020845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.245 09:07:01 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.245 09:07:01 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:00.245 09:07:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.245 Malloc0 00:05:00.503 09:07:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.503 Malloc1 00:05:00.503 09:07:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.503 09:07:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:00.762 /dev/nbd0 00:05:00.762 09:07:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:00.762 09:07:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.762 1+0 records in 00:05:00.762 1+0 records out 00:05:00.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234832 s, 17.4 MB/s 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:00.762 09:07:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:00.762 09:07:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.762 09:07:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.762 09:07:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.020 /dev/nbd1 00:05:01.020 09:07:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.021 09:07:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.021 1+0 records in 00:05:01.021 1+0 records out 00:05:01.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208417 s, 19.7 MB/s 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:01.021 09:07:02 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:01.021 09:07:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.021 09:07:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.021 
09:07:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.021 09:07:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.021 09:07:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.279 { 00:05:01.279 "nbd_device": "/dev/nbd0", 00:05:01.279 "bdev_name": "Malloc0" 00:05:01.279 }, 00:05:01.279 { 00:05:01.279 "nbd_device": "/dev/nbd1", 00:05:01.279 "bdev_name": "Malloc1" 00:05:01.279 } 00:05:01.279 ]' 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.279 { 00:05:01.279 "nbd_device": "/dev/nbd0", 00:05:01.279 "bdev_name": "Malloc0" 00:05:01.279 }, 00:05:01.279 { 00:05:01.279 "nbd_device": "/dev/nbd1", 00:05:01.279 "bdev_name": "Malloc1" 00:05:01.279 } 00:05:01.279 ]' 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.279 /dev/nbd1' 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.279 /dev/nbd1' 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.279 256+0 records in 00:05:01.279 256+0 records out 00:05:01.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104685 s, 100 MB/s 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.279 256+0 records in 00:05:01.279 256+0 records out 00:05:01.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141317 s, 74.2 MB/s 00:05:01.279 09:07:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.280 09:07:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.280 256+0 records in 00:05:01.280 256+0 records out 00:05:01.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152151 s, 68.9 MB/s 00:05:01.280 09:07:02 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.280 09:07:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.280 09:07:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.280 09:07:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.280 09:07:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.280 09:07:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.280 09:07:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.280 09:07:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.280 09:07:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.539 09:07:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.798 09:07:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.057 09:07:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.057 09:07:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.057 09:07:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.057 09:07:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.057 09:07:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.057 09:07:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.057 09:07:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.057 09:07:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.057 09:07:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.057 09:07:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.057 09:07:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.057 09:07:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.057 09:07:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.315 09:07:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.574 [2024-11-19 09:07:03.386359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.574 [2024-11-19 09:07:03.424669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.574 [2024-11-19 09:07:03.424670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.574 [2024-11-19 09:07:03.465944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.574 [2024-11-19 09:07:03.466005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.861 09:07:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.861 09:07:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:05.861 spdk_app_start Round 1 00:05:05.861 09:07:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 925752 /var/tmp/spdk-nbd.sock 00:05:05.861 09:07:06 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 925752 ']' 00:05:05.861 09:07:06 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.861 09:07:06 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.861 09:07:06 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
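Each round drives the same nbd data path: two 64 MiB malloc bdevs with 4 KiB blocks become /dev/nbd0 and /dev/nbd1, a 1 MiB random pattern is written through each device with O_DIRECT, and cmp reads it back. Condensed from the trace for one device (workspace paths shortened to /tmp; a sketch, not the harness):

  sock=/var/tmp/spdk-nbd.sock
  name=$(./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096)  # prints the new bdev name, e.g. Malloc0
  ./scripts/rpc.py -s "$sock" nbd_start_disk "$name" /dev/nbd0

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256        # 1 MiB pattern file
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                         # non-zero exit on any byte mismatch
  rm /tmp/nbdrandtest

  ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0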
00:05:05.861 09:07:06 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.861 09:07:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.861 09:07:06 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.861 09:07:06 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:05.861 09:07:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.861 Malloc0 00:05:05.861 09:07:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.861 Malloc1 00:05:05.861 09:07:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.861 09:07:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.120 /dev/nbd0 00:05:06.120 09:07:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.120 09:07:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:06.120 1+0 records in 00:05:06.120 1+0 records out 00:05:06.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203101 s, 20.2 MB/s 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:06.120 09:07:07 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:06.120 09:07:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.120 09:07:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.120 09:07:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.379 /dev/nbd1 00:05:06.379 09:07:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.379 09:07:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.379 1+0 records in 00:05:06.379 1+0 records out 00:05:06.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253414 s, 16.2 MB/s 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:06.379 09:07:07 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:06.379 09:07:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.379 09:07:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.379 09:07:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.379 09:07:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.379 09:07:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:06.637 { 00:05:06.637 "nbd_device": "/dev/nbd0", 00:05:06.637 "bdev_name": "Malloc0" 00:05:06.637 }, 00:05:06.637 { 00:05:06.637 "nbd_device": "/dev/nbd1", 00:05:06.637 "bdev_name": "Malloc1" 00:05:06.637 } 00:05:06.637 ]' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.637 { 00:05:06.637 "nbd_device": "/dev/nbd0", 00:05:06.637 "bdev_name": "Malloc0" 00:05:06.637 }, 00:05:06.637 { 00:05:06.637 "nbd_device": "/dev/nbd1", 00:05:06.637 "bdev_name": "Malloc1" 00:05:06.637 } 00:05:06.637 ]' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.637 /dev/nbd1' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.637 /dev/nbd1' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.637 256+0 records in 00:05:06.637 256+0 records out 00:05:06.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101454 s, 103 MB/s 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.637 256+0 records in 00:05:06.637 256+0 records out 00:05:06.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014247 s, 73.6 MB/s 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.637 256+0 records in 00:05:06.637 256+0 records out 00:05:06.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153765 s, 68.2 MB/s 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.637 09:07:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.895 09:07:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.895 09:07:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.895 09:07:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.895 09:07:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.895 09:07:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.895 09:07:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.895 09:07:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.896 09:07:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.896 09:07:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.896 09:07:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.154 09:07:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.414 09:07:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.414 09:07:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.673 09:07:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.673 [2024-11-19 09:07:08.699473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.932 [2024-11-19 09:07:08.738150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.932 [2024-11-19 09:07:08.738150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.932 [2024-11-19 09:07:08.780032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.932 [2024-11-19 09:07:08.780072] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.221 09:07:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.221 09:07:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:11.221 spdk_app_start Round 2 00:05:11.221 09:07:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 925752 /var/tmp/spdk-nbd.sock 00:05:11.221 09:07:11 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 925752 ']' 00:05:11.221 09:07:11 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.221 09:07:11 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:11.221 09:07:11 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
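Before each verification pass the harness waits for the kernel to actually surface the nbd device; the waitfornbd calls traced above reduce to two polls, one on /proc/partitions and one proving a direct read returns data. Condensed (the retry sleep is an assumption; the trace only shows the counters):

  waitfornbd() {
    local nbd_name=$1 i size
    # Wait for the device to show up in /proc/partitions (up to 20 tries).
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    # Then prove a direct 4 KiB read actually returns data.
    for ((i = 1; i <= 20; i++)); do
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ] && return 0
    done
    return 1
  }

  waitfornbd nbd0   # as invoked for each device above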
00:05:11.221 09:07:11 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:11.221 09:07:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.221 09:07:11 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:11.221 09:07:11 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:11.221 09:07:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.221 Malloc0 00:05:11.221 09:07:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.221 Malloc1 00:05:11.221 09:07:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.221 09:07:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.480 /dev/nbd0 00:05:11.480 09:07:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.480 09:07:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:11.480 1+0 records in 00:05:11.480 1+0 records out 00:05:11.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225768 s, 18.1 MB/s 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:11.480 09:07:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:11.480 09:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.480 09:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.480 09:07:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.739 /dev/nbd1 00:05:11.739 09:07:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.739 09:07:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.739 1+0 records in 00:05:11.739 1+0 records out 00:05:11.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017778 s, 23.0 MB/s 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:11.739 09:07:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:11.739 09:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.739 09:07:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.739 09:07:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.739 09:07:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.739 09:07:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:11.999 { 00:05:11.999 "nbd_device": "/dev/nbd0", 00:05:11.999 "bdev_name": "Malloc0" 00:05:11.999 }, 00:05:11.999 { 00:05:11.999 "nbd_device": "/dev/nbd1", 00:05:11.999 "bdev_name": "Malloc1" 00:05:11.999 } 00:05:11.999 ]' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.999 { 00:05:11.999 "nbd_device": "/dev/nbd0", 00:05:11.999 "bdev_name": "Malloc0" 00:05:11.999 }, 00:05:11.999 { 00:05:11.999 "nbd_device": "/dev/nbd1", 00:05:11.999 "bdev_name": "Malloc1" 00:05:11.999 } 00:05:11.999 ]' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.999 /dev/nbd1' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.999 /dev/nbd1' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.999 256+0 records in 00:05:11.999 256+0 records out 00:05:11.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100647 s, 104 MB/s 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.999 256+0 records in 00:05:11.999 256+0 records out 00:05:11.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138208 s, 75.9 MB/s 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.999 256+0 records in 00:05:11.999 256+0 records out 00:05:11.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152794 s, 68.6 MB/s 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.999 09:07:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.258 09:07:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.517 09:07:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.517 09:07:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.517 09:07:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.518 09:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.518 09:07:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.518 09:07:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.518 09:07:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.518 09:07:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.518 09:07:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.518 09:07:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.518 09:07:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.777 09:07:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.777 09:07:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.036 09:07:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.036 [2024-11-19 09:07:14.048797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.036 [2024-11-19 09:07:14.086102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.036 [2024-11-19 09:07:14.086103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.295 [2024-11-19 09:07:14.127528] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.295 [2024-11-19 09:07:14.127570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.584 09:07:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 925752 /var/tmp/spdk-nbd.sock 00:05:16.584 09:07:16 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 925752 ']' 00:05:16.584 09:07:16 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.584 09:07:16 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.584 09:07:16 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
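After nbd_stop_disk, every round re-lists the exported devices and asserts the count is back to zero, which is all nbd_get_count above is doing: the RPC returns a JSON array, jq pulls out the device nodes, and grep -c counts them. Roughly:

  json=$(./scripts/rpc.py -s "$sock" nbd_get_disks)
  # grep -c exits non-zero when it counts 0 matches, hence the trailing true
  # (visible in the trace as the bare 'true' after grep).
  count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  if [ "$count" -ne 0 ]; then
    echo "expected 0 nbd devices, found $count" >&2
    exit 1
  fi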
00:05:16.584 09:07:16 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.584 09:07:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:16.584 09:07:17 event.app_repeat -- event/event.sh@39 -- # killprocess 925752 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 925752 ']' 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 925752 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 925752 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 925752' 00:05:16.584 killing process with pid 925752 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@971 -- # kill 925752 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@976 -- # wait 925752 00:05:16.584 spdk_app_start is called in Round 0. 00:05:16.584 Shutdown signal received, stop current app iteration 00:05:16.584 Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 reinitialization... 00:05:16.584 spdk_app_start is called in Round 1. 00:05:16.584 Shutdown signal received, stop current app iteration 00:05:16.584 Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 reinitialization... 00:05:16.584 spdk_app_start is called in Round 2. 00:05:16.584 Shutdown signal received, stop current app iteration 00:05:16.584 Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 reinitialization... 00:05:16.584 spdk_app_start is called in Round 3. 
00:05:16.584 Shutdown signal received, stop current app iteration 00:05:16.584 09:07:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:16.584 09:07:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:16.584 00:05:16.584 real 0m16.446s 00:05:16.584 user 0m36.251s 00:05:16.584 sys 0m2.486s 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.584 09:07:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.584 ************************************ 00:05:16.584 END TEST app_repeat 00:05:16.584 ************************************ 00:05:16.584 09:07:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:16.584 09:07:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:16.584 09:07:17 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.584 09:07:17 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.584 09:07:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.584 ************************************ 00:05:16.584 START TEST cpu_locks 00:05:16.584 ************************************ 00:05:16.584 09:07:17 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:16.584 * Looking for test storage... 00:05:16.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:16.584 09:07:17 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.584 09:07:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.584 09:07:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.584 09:07:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.584 09:07:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:16.584 09:07:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.584 09:07:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.584 --rc genhtml_branch_coverage=1 00:05:16.584 --rc genhtml_function_coverage=1 00:05:16.584 --rc genhtml_legend=1 00:05:16.584 --rc geninfo_all_blocks=1 00:05:16.584 --rc geninfo_unexecuted_blocks=1 00:05:16.584 00:05:16.584 ' 00:05:16.584 09:07:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.584 --rc genhtml_branch_coverage=1 00:05:16.584 --rc genhtml_function_coverage=1 00:05:16.584 --rc genhtml_legend=1 00:05:16.584 --rc geninfo_all_blocks=1 00:05:16.584 --rc geninfo_unexecuted_blocks=1 00:05:16.584 00:05:16.584 ' 00:05:16.584 09:07:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.585 --rc genhtml_branch_coverage=1 00:05:16.585 --rc genhtml_function_coverage=1 00:05:16.585 --rc genhtml_legend=1 00:05:16.585 --rc geninfo_all_blocks=1 00:05:16.585 --rc geninfo_unexecuted_blocks=1 00:05:16.585 00:05:16.585 ' 00:05:16.585 09:07:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.585 --rc genhtml_branch_coverage=1 00:05:16.585 --rc genhtml_function_coverage=1 00:05:16.585 --rc genhtml_legend=1 00:05:16.585 --rc geninfo_all_blocks=1 00:05:16.585 --rc geninfo_unexecuted_blocks=1 00:05:16.585 00:05:16.585 ' 00:05:16.585 09:07:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:16.585 09:07:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:16.585 09:07:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:16.585 09:07:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:16.585 09:07:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.585 09:07:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.585 09:07:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.585 ************************************ 
00:05:16.585 START TEST default_locks 00:05:16.585 ************************************ 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=928760 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 928760 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 928760 ']' 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.585 09:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.844 [2024-11-19 09:07:17.645866] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:05:16.844 [2024-11-19 09:07:17.645909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928760 ] 00:05:16.844 [2024-11-19 09:07:17.721587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.844 [2024-11-19 09:07:17.761943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.103 09:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.103 09:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:17.103 09:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 928760 00:05:17.103 09:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 928760 00:05:17.103 09:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.670 lslocks: write error 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 928760 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 928760 ']' 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 928760 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 928760 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 928760' 
00:05:17.670 killing process with pid 928760 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 928760 00:05:17.670 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 928760 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 928760 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 928760 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 928760 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 928760 ']' 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (928760) - No such process 00:05:17.929 ERROR: process (pid: 928760) is no longer running 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.929 09:07:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.929 00:05:17.929 real 0m1.198s 00:05:17.929 user 0m1.152s 00:05:17.929 sys 0m0.549s 00:05:17.930 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.930 09:07:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.930 ************************************ 00:05:17.930 END TEST default_locks 00:05:17.930 ************************************ 00:05:17.930 09:07:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:17.930 09:07:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.930 09:07:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.930 09:07:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.930 ************************************ 00:05:17.930 START TEST default_locks_via_rpc 00:05:17.930 ************************************ 00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=929018 00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 929018 00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 929018 ']' 00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
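Note: the default_locks test that finished above verifies SPDK's per-core lock files directly: while the target runs on core mask 0x1 it must hold a lock visible via lslocks, after kill/wait the lock file must be gone, and waiting on the dead PID must fail (the "No such process" / "is no longer running" lines are the expected negative path). A minimal sketch of the two checks, with an illustrative spdk_tgt build path and a sleep standing in for waitforlisten:

    /path/to/spdk/build/bin/spdk_tgt -m 0x1 &     # illustrative path; claims core 0
    pid=$!
    sleep 1                                       # crude stand-in for waitforlisten
    lslocks -p "$pid" | grep -q spdk_cpu_lock \
        && echo "core lock held while the target runs"
    kill "$pid"; wait "$pid" 2>/dev/null
    shopt -s nullglob
    lock_files=(/var/tmp/spdk_cpu_lock_*)         # empty once the lock is released
    (( ${#lock_files[@]} == 0 )) && echo "lock file released after kill"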
00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.930 09:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.930 [2024-11-19 09:07:18.912154] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:05:17.930 [2024-11-19 09:07:18.912199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929018 ] 00:05:17.930 [2024-11-19 09:07:18.981840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.188 [2024-11-19 09:07:19.020895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.188 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.188 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:18.188 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:18.188 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.188 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 929018 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 929018 00:05:18.447 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.705 09:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 929018 00:05:18.705 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 929018 ']' 00:05:18.705 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 929018 00:05:18.705 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:18.705 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.705 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 929018 00:05:18.705 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:18.705 09:07:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:18.705 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 929018' 00:05:18.705 killing process with pid 929018 00:05:18.705 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 929018 00:05:18.706 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 929018 00:05:18.963 00:05:18.963 real 0m1.077s 00:05:18.963 user 0m1.031s 00:05:18.963 sys 0m0.507s 00:05:18.963 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.963 09:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.963 ************************************ 00:05:18.963 END TEST default_locks_via_rpc 00:05:18.963 ************************************ 00:05:18.963 09:07:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:18.963 09:07:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.963 09:07:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.963 09:07:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.963 ************************************ 00:05:18.963 START TEST non_locking_app_on_locked_coremask 00:05:18.963 ************************************ 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=929272 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 929272 /var/tmp/spdk.sock 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 929272 ']' 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.963 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.222 [2024-11-19 09:07:20.064402] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
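Note: default_locks_via_rpc, which ended above, toggles the same locks at runtime rather than at startup: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-acquires them, with no_locks/locks_exist asserting the state after each call. A minimal sketch of the round trip, assuming a target launched with -m 0x1 is serving the default /var/tmp/spdk.sock; the rpc.py path and the pidof lookup are illustrative:

    rpc=/path/to/spdk/scripts/rpc.py                 # illustrative path
    "$rpc" framework_disable_cpumask_locks           # drops /var/tmp/spdk_cpu_lock_000
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null \
        || echo "no lock files while locking is disabled"
    "$rpc" framework_enable_cpumask_locks            # re-claims the core lock
    lslocks -p "$(pidof spdk_tgt)" | grep -q spdk_cpu_lock && echo "lock re-taken"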
00:05:19.222 [2024-11-19 09:07:20.064465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929272 ] 00:05:19.222 [2024-11-19 09:07:20.140253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.222 [2024-11-19 09:07:20.181103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=929278 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 929278 /var/tmp/spdk2.sock 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 929278 ']' 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.481 09:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.481 [2024-11-19 09:07:20.457400] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:05:19.481 [2024-11-19 09:07:20.457447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929278 ] 00:05:19.740 [2024-11-19 09:07:20.546504] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:19.740 [2024-11-19 09:07:20.546535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.740 [2024-11-19 09:07:20.627601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.307 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.307 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:20.307 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 929272 00:05:20.307 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 929272 00:05:20.307 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.874 lslocks: write error 00:05:20.874 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 929272 00:05:20.874 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 929272 ']' 00:05:20.875 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 929272 00:05:20.875 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:20.875 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:20.875 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 929272 00:05:20.875 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:20.875 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:20.875 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 929272' 00:05:20.875 killing process with pid 929272 00:05:20.875 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 929272 00:05:20.875 09:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 929272 00:05:21.443 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 929278 00:05:21.443 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 929278 ']' 00:05:21.443 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 929278 00:05:21.443 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:21.443 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:21.443 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 929278 00:05:21.702 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:21.702 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:21.702 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 929278' 00:05:21.702 killing 
process with pid 929278 00:05:21.702 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 929278 00:05:21.702 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 929278 00:05:21.961 00:05:21.961 real 0m2.817s 00:05:21.961 user 0m2.958s 00:05:21.961 sys 0m0.948s 00:05:21.961 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.961 09:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.961 ************************************ 00:05:21.961 END TEST non_locking_app_on_locked_coremask 00:05:21.961 ************************************ 00:05:21.961 09:07:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:21.961 09:07:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.961 09:07:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.961 09:07:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.961 ************************************ 00:05:21.961 START TEST locking_app_on_unlocked_coremask 00:05:21.961 ************************************ 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=929772 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 929772 /var/tmp/spdk.sock 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 929772 ']' 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.961 09:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.961 [2024-11-19 09:07:22.953784] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:05:21.961 [2024-11-19 09:07:22.953830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929772 ] 00:05:22.220 [2024-11-19 09:07:23.028646] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
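Note: non_locking_app_on_locked_coremask, finished above, exercises the escape hatch the next test also relies on: a second spdk_tgt may share an already-locked core only when launched with --disable-cpumask-locks, in which case its startup prints "CPU core locks deactivated" instead of claiming lock files. A minimal sketch of the two-instance setup, with spdk_tgt assumed on PATH and the second RPC socket as in the log:

    spdk_tgt -m 0x1 &                          # first instance claims core 0
    first=$!
    sleep 1                                    # crude stand-in for waitforlisten
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    second=$!                                  # same core, but no lock is taken
    sleep 1
    kill "$second" "$first"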
00:05:22.220 [2024-11-19 09:07:23.028671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.220 [2024-11-19 09:07:23.066211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=929783 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 929783 /var/tmp/spdk2.sock 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 929783 ']' 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.479 09:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.479 [2024-11-19 09:07:23.322842] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:05:22.479 [2024-11-19 09:07:23.322891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929783 ] 00:05:22.479 [2024-11-19 09:07:23.414690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.479 [2024-11-19 09:07:23.495263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.414 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:23.414 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:23.414 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 929783 00:05:23.414 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.414 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 929783 00:05:23.673 lslocks: write error 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 929772 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 929772 ']' 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 929772 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 929772 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 929772' 00:05:23.673 killing process with pid 929772 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 929772 00:05:23.673 09:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 929772 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 929783 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 929783 ']' 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 929783 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 929783 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:24.609 09:07:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 929783' 00:05:24.609 killing process with pid 929783 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 929783 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 929783 00:05:24.609 00:05:24.609 real 0m2.766s 00:05:24.609 user 0m2.915s 00:05:24.609 sys 0m0.907s 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:24.609 09:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.609 ************************************ 00:05:24.609 END TEST locking_app_on_unlocked_coremask 00:05:24.609 ************************************ 00:05:24.868 09:07:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:24.868 09:07:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:24.868 09:07:25 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.868 09:07:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.868 ************************************ 00:05:24.868 START TEST locking_app_on_locked_coremask 00:05:24.868 ************************************ 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=930278 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 930278 /var/tmp/spdk.sock 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 930278 ']' 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:24.868 09:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.868 [2024-11-19 09:07:25.786018] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:05:24.868 [2024-11-19 09:07:25.786062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930278 ] 00:05:24.868 [2024-11-19 09:07:25.859240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.868 [2024-11-19 09:07:25.901502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=930281 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 930281 /var/tmp/spdk2.sock 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 930281 /var/tmp/spdk2.sock 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 930281 /var/tmp/spdk2.sock 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 930281 ']' 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.127 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.127 [2024-11-19 09:07:26.164263] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:05:25.127 [2024-11-19 09:07:26.164311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930281 ] 00:05:25.386 [2024-11-19 09:07:26.249818] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 930278 has claimed it. 00:05:25.386 [2024-11-19 09:07:26.249849] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:25.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (930281) - No such process 00:05:25.953 ERROR: process (pid: 930281) is no longer running 00:05:25.953 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.953 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:25.953 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:25.953 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:25.953 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:25.953 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:25.953 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 930278 00:05:25.953 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 930278 00:05:25.953 09:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.212 lslocks: write error 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 930278 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 930278 ']' 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 930278 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 930278 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 930278' 00:05:26.212 killing process with pid 930278 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 930278 00:05:26.212 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 930278 00:05:26.472 00:05:26.472 real 0m1.770s 00:05:26.472 user 0m1.919s 00:05:26.472 sys 0m0.589s 00:05:26.472 09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.472 
09:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.472 ************************************ 00:05:26.472 END TEST locking_app_on_locked_coremask 00:05:26.472 ************************************ 00:05:26.732 09:07:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:26.732 09:07:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.732 09:07:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.732 09:07:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.732 ************************************ 00:05:26.732 START TEST locking_overlapped_coremask 00:05:26.732 ************************************ 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=930543 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 930543 /var/tmp/spdk.sock 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 930543 ']' 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:26.732 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.732 [2024-11-19 09:07:27.619803] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
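Note: locking_app_on_locked_coremask, which ended just above, is the inverse case: with locking left on, a second target on the same mask must refuse to start, failing in claim_cpu_cores with "Cannot create lock on core 0, probably process <pid> has claimed it" and exiting before it ever listens (hence the expected kill/"No such process" lines from the NOT wrapper). A minimal sketch, with spdk_tgt assumed on PATH:

    spdk_tgt -m 0x1 &                          # claims core 0
    sleep 1                                    # crude stand-in for waitforlisten
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # locks enabled: aborts on the claimed core
    echo "second instance exit status: $?"     # non-zero, as the test asserts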
00:05:26.732 [2024-11-19 09:07:27.619841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930543 ] 00:05:26.732 [2024-11-19 09:07:27.695559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.732 [2024-11-19 09:07:27.740978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.732 [2024-11-19 09:07:27.741038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.732 [2024-11-19 09:07:27.741038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=930625 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 930625 /var/tmp/spdk2.sock 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 930625 /var/tmp/spdk2.sock 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 930625 /var/tmp/spdk2.sock 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 930625 ']' 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:26.992 09:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.992 [2024-11-19 09:07:28.002290] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:05:26.992 [2024-11-19 09:07:28.002337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930625 ] 00:05:27.251 [2024-11-19 09:07:28.091617] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 930543 has claimed it. 00:05:27.251 [2024-11-19 09:07:28.091648] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (930625) - No such process 00:05:27.819 ERROR: process (pid: 930625) is no longer running 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 930543 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 930543 ']' 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 930543 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 930543 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 930543' 00:05:27.819 killing process with pid 930543 00:05:27.819 09:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 930543 00:05:27.819 09:07:28 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 930543 00:05:28.077 00:05:28.077 real 0m1.433s 00:05:28.077 user 0m3.962s 00:05:28.077 sys 0m0.392s 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.077 ************************************ 00:05:28.077 END TEST locking_overlapped_coremask 00:05:28.077 ************************************ 00:05:28.077 09:07:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.077 09:07:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.077 09:07:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.077 09:07:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.077 ************************************ 00:05:28.077 START TEST locking_overlapped_coremask_via_rpc 00:05:28.077 ************************************ 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=930814 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 930814 /var/tmp/spdk.sock 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 930814 ']' 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.077 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.077 [2024-11-19 09:07:29.122255] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:05:28.077 [2024-11-19 09:07:29.122295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930814 ] 00:05:28.336 [2024-11-19 09:07:29.194595] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:28.336 [2024-11-19 09:07:29.194621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.336 [2024-11-19 09:07:29.239535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.336 [2024-11-19 09:07:29.239642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.336 [2024-11-19 09:07:29.239643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=930955 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 930955 /var/tmp/spdk2.sock 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 930955 ']' 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.594 09:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.594 [2024-11-19 09:07:29.501130] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:05:28.594 [2024-11-19 09:07:29.501181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930955 ] 00:05:28.594 [2024-11-19 09:07:29.594910] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
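The two targets are launched with masks that collide on exactly one core: 0x7 selects cores 0 through 2 for pid 930814, while 0x1c selects cores 2 through 4 for the second target, so only core 2 is contested. A quick bash check of that overlap (the helper name is made up for illustration):

```bash
# Hypothetical helper: expand a hex coremask into the core indices it selects.
mask_to_cores() {
    local mask=$(( $1 )) i
    for (( i = 0; (mask >> i) > 0; i++ )); do
        (( (mask >> i) & 1 )) && printf '%d ' "$i"
    done
    echo
}

mask_to_cores 0x07   # -> 0 1 2
mask_to_cores 0x1c   # -> 2 3 4   (core 2 is the only one shared with 0x07)
```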
00:05:28.594 [2024-11-19 09:07:29.594941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.852 [2024-11-19 09:07:29.690336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.852 [2024-11-19 09:07:29.690451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.852 [2024-11-19 09:07:29.690452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.419 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.420 [2024-11-19 09:07:30.373025] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 930814 has claimed it. 
00:05:29.420 request: 00:05:29.420 { 00:05:29.420 "method": "framework_enable_cpumask_locks", 00:05:29.420 "req_id": 1 00:05:29.420 } 00:05:29.420 Got JSON-RPC error response 00:05:29.420 response: 00:05:29.420 { 00:05:29.420 "code": -32603, 00:05:29.420 "message": "Failed to claim CPU core: 2" 00:05:29.420 } 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 930814 /var/tmp/spdk.sock 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 930814 ']' 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.420 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.679 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.679 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:29.679 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 930955 /var/tmp/spdk2.sock 00:05:29.679 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 930955 ']' 00:05:29.679 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.679 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.679 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
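Because both targets start with --disable-cpumask-locks, the locks are switched on afterwards over JSON-RPC: framework_enable_cpumask_locks succeeds on the first target and pins cores 0 through 2, and the same call against the second target's socket then fails with the -32603 response shown above, since core 2 is already held by pid 930814. The equivalent manual calls, using this workspace's rpc.py:

```bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC framework_enable_cpumask_locks                 # first target: succeeds
$RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# second target: fails with JSON-RPC -32603 "Failed to claim CPU core: 2"
```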
00:05:29.679 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.679 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.938 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.938 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:29.938 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:29.938 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:29.938 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:29.938 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:29.938 00:05:29.938 real 0m1.729s 00:05:29.938 user 0m0.853s 00:05:29.938 sys 0m0.123s 00:05:29.938 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.938 09:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.938 ************************************ 00:05:29.938 END TEST locking_overlapped_coremask_via_rpc 00:05:29.938 ************************************ 00:05:29.938 09:07:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:29.938 09:07:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 930814 ]] 00:05:29.938 09:07:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 930814 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 930814 ']' 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 930814 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 930814 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 930814' 00:05:29.938 killing process with pid 930814 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 930814 00:05:29.938 09:07:30 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 930814 00:05:30.197 09:07:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 930955 ]] 00:05:30.197 09:07:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 930955 00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 930955 ']' 00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 930955 00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
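Teardown goes through the harness's killprocess helper, whose xtrace runs above and below: probe the pid with kill -0, read the process name with ps and refuse to signal a sudo wrapper, then kill and reap with wait. A condensed sketch, simplified from test/common/autotest_common.sh (details approximate):

```bash
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                      # bail out if already gone
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
        [ "$name" = sudo ] && return 1              # never signal a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap and surface exit status
}
```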
00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 930955 00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 930955' 00:05:30.197 killing process with pid 930955 00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 930955 00:05:30.197 09:07:31 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 930955 00:05:30.765 09:07:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:30.766 09:07:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:30.766 09:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 930814 ]] 00:05:30.766 09:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 930814 00:05:30.766 09:07:31 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 930814 ']' 00:05:30.766 09:07:31 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 930814 00:05:30.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (930814) - No such process 00:05:30.766 09:07:31 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 930814 is not found' 00:05:30.766 Process with pid 930814 is not found 00:05:30.766 09:07:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 930955 ]] 00:05:30.766 09:07:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 930955 00:05:30.766 09:07:31 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 930955 ']' 00:05:30.766 09:07:31 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 930955 00:05:30.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (930955) - No such process 00:05:30.766 09:07:31 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 930955 is not found' 00:05:30.766 Process with pid 930955 is not found 00:05:30.766 09:07:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:30.766 00:05:30.766 real 0m14.188s 00:05:30.766 user 0m24.650s 00:05:30.766 sys 0m4.972s 00:05:30.766 09:07:31 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.766 09:07:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.766 ************************************ 00:05:30.766 END TEST cpu_locks 00:05:30.766 ************************************ 00:05:30.766 00:05:30.766 real 0m38.545s 00:05:30.766 user 1m13.144s 00:05:30.766 sys 0m8.434s 00:05:30.766 09:07:31 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.766 09:07:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.766 ************************************ 00:05:30.766 END TEST event 00:05:30.766 ************************************ 00:05:30.766 09:07:31 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:30.766 09:07:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:30.766 09:07:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.766 09:07:31 -- common/autotest_common.sh@10 -- # set +x 00:05:30.766 ************************************ 00:05:30.766 START TEST thread 00:05:30.766 ************************************ 00:05:30.766 09:07:31 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:30.766 * Looking for test storage... 00:05:30.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:30.766 09:07:31 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:30.766 09:07:31 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:30.766 09:07:31 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.025 09:07:31 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.025 09:07:31 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.025 09:07:31 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.025 09:07:31 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.025 09:07:31 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.025 09:07:31 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.025 09:07:31 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.025 09:07:31 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.025 09:07:31 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.025 09:07:31 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.025 09:07:31 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.025 09:07:31 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.025 09:07:31 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:31.025 09:07:31 thread -- scripts/common.sh@345 -- # : 1 00:05:31.025 09:07:31 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.025 09:07:31 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.025 09:07:31 thread -- scripts/common.sh@365 -- # decimal 1 00:05:31.025 09:07:31 thread -- scripts/common.sh@353 -- # local d=1 00:05:31.025 09:07:31 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.025 09:07:31 thread -- scripts/common.sh@355 -- # echo 1 00:05:31.025 09:07:31 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.025 09:07:31 thread -- scripts/common.sh@366 -- # decimal 2 00:05:31.025 09:07:31 thread -- scripts/common.sh@353 -- # local d=2 00:05:31.025 09:07:31 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.025 09:07:31 thread -- scripts/common.sh@355 -- # echo 2 00:05:31.025 09:07:31 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.025 09:07:31 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.025 09:07:31 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.025 09:07:31 thread -- scripts/common.sh@368 -- # return 0 00:05:31.025 09:07:31 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.025 09:07:31 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.025 --rc genhtml_branch_coverage=1 00:05:31.025 --rc genhtml_function_coverage=1 00:05:31.025 --rc genhtml_legend=1 00:05:31.026 --rc geninfo_all_blocks=1 00:05:31.026 --rc geninfo_unexecuted_blocks=1 00:05:31.026 00:05:31.026 ' 00:05:31.026 09:07:31 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.026 --rc genhtml_branch_coverage=1 00:05:31.026 --rc genhtml_function_coverage=1 00:05:31.026 --rc genhtml_legend=1 00:05:31.026 --rc geninfo_all_blocks=1 00:05:31.026 --rc geninfo_unexecuted_blocks=1 00:05:31.026 00:05:31.026 ' 00:05:31.026 09:07:31 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.026 --rc genhtml_branch_coverage=1 00:05:31.026 --rc genhtml_function_coverage=1 00:05:31.026 --rc genhtml_legend=1 00:05:31.026 --rc geninfo_all_blocks=1 00:05:31.026 --rc geninfo_unexecuted_blocks=1 00:05:31.026 00:05:31.026 ' 00:05:31.026 09:07:31 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.026 --rc genhtml_branch_coverage=1 00:05:31.026 --rc genhtml_function_coverage=1 00:05:31.026 --rc genhtml_legend=1 00:05:31.026 --rc geninfo_all_blocks=1 00:05:31.026 --rc geninfo_unexecuted_blocks=1 00:05:31.026 00:05:31.026 ' 00:05:31.026 09:07:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.026 09:07:31 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:31.026 09:07:31 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.026 09:07:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.026 ************************************ 00:05:31.026 START TEST thread_poller_perf 00:05:31.026 ************************************ 00:05:31.026 09:07:31 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.026 [2024-11-19 09:07:31.908325] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:05:31.026 [2024-11-19 09:07:31.908381] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931388 ] 00:05:31.026 [2024-11-19 09:07:31.984877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.026 [2024-11-19 09:07:32.025368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.026 Running 1000 pollers for 1 seconds with 1 microseconds period. 
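poller_perf registers 1000 pollers (-b 1000) with a 1 microsecond period (-l 1) and spins a single reactor for one second (-t 1), then reports the average cost of one poller invocation. The derivation below is inferred from the fields the tool prints, not from its source:

```latex
\mathrm{poller\_cost_{cyc}} = \frac{\mathrm{busy}}{\mathrm{total\_run\_count}},
\qquad
\mathrm{poller\_cost_{nsec}} = \frac{\mathrm{poller\_cost_{cyc}}}{\mathrm{tsc\_hz}/10^{9}}
```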
00:05:32.403 [2024-11-19T08:07:33.462Z] ====================================== 00:05:32.403 [2024-11-19T08:07:33.462Z] busy:2309018160 (cyc) 00:05:32.403 [2024-11-19T08:07:33.462Z] total_run_count: 411000 00:05:32.403 [2024-11-19T08:07:33.462Z] tsc_hz: 2300000000 (cyc) 00:05:32.403 [2024-11-19T08:07:33.462Z] ====================================== 00:05:32.403 [2024-11-19T08:07:33.462Z] poller_cost: 5618 (cyc), 2442 (nsec) 00:05:32.403 00:05:32.403 real 0m1.183s 00:05:32.403 user 0m1.105s 00:05:32.403 sys 0m0.074s 00:05:32.403 09:07:33 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:32.403 09:07:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.403 ************************************ 00:05:32.403 END TEST thread_poller_perf 00:05:32.403 ************************************ 00:05:32.403 09:07:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:32.403 09:07:33 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:32.403 09:07:33 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:32.403 09:07:33 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.403 ************************************ 00:05:32.403 START TEST thread_poller_perf 00:05:32.403 ************************************ 00:05:32.403 09:07:33 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:32.403 [2024-11-19 09:07:33.159190] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:05:32.403 [2024-11-19 09:07:33.159259] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931637 ] 00:05:32.403 [2024-11-19 09:07:33.236148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.403 [2024-11-19 09:07:33.275917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.403 Running 1000 pollers for 1 seconds with 0 microseconds period. 
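Plugging in the first run's figures confirms the report: 2309018160 busy cycles over 411000 runs is roughly 5618 cycles per poll, and at the 2.3 GHz TSC rate that comes to about 2442 ns, matching the printed poller_cost. The zero-period rerun announced above drives the same pollers in busy mode, so the results that follow show far more iterations at a much lower per-poll cost:

```latex
\frac{2309018160}{411000} \approx 5618\ \mathrm{cyc},
\qquad
\frac{5618\ \mathrm{cyc}}{2.3\ \mathrm{cyc/ns}} \approx 2442\ \mathrm{ns}
```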
00:05:33.340 [2024-11-19T08:07:34.399Z] ====================================== 00:05:33.340 [2024-11-19T08:07:34.399Z] busy:2301348702 (cyc) 00:05:33.340 [2024-11-19T08:07:34.399Z] total_run_count: 5383000 00:05:33.340 [2024-11-19T08:07:34.399Z] tsc_hz: 2300000000 (cyc) 00:05:33.340 [2024-11-19T08:07:34.399Z] ====================================== 00:05:33.340 [2024-11-19T08:07:34.399Z] poller_cost: 427 (cyc), 185 (nsec) 00:05:33.340 00:05:33.340 real 0m1.177s 00:05:33.340 user 0m1.107s 00:05:33.340 sys 0m0.066s 00:05:33.340 09:07:34 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.340 09:07:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.340 ************************************ 00:05:33.340 END TEST thread_poller_perf 00:05:33.340 ************************************ 00:05:33.340 09:07:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:33.340 00:05:33.340 real 0m2.674s 00:05:33.340 user 0m2.376s 00:05:33.340 sys 0m0.313s 00:05:33.340 09:07:34 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.340 09:07:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.340 ************************************ 00:05:33.340 END TEST thread 00:05:33.340 ************************************ 00:05:33.340 09:07:34 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:33.340 09:07:34 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:33.340 09:07:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.340 09:07:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.340 09:07:34 -- common/autotest_common.sh@10 -- # set +x 00:05:33.600 ************************************ 00:05:33.600 START TEST app_cmdline 00:05:33.601 ************************************ 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:33.601 * Looking for test storage... 
00:05:33.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.601 09:07:34 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.601 --rc genhtml_branch_coverage=1 00:05:33.601 --rc genhtml_function_coverage=1 00:05:33.601 --rc genhtml_legend=1 00:05:33.601 --rc geninfo_all_blocks=1 00:05:33.601 --rc geninfo_unexecuted_blocks=1 00:05:33.601 00:05:33.601 ' 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.601 --rc genhtml_branch_coverage=1 00:05:33.601 --rc genhtml_function_coverage=1 00:05:33.601 --rc genhtml_legend=1 00:05:33.601 --rc geninfo_all_blocks=1 00:05:33.601 --rc geninfo_unexecuted_blocks=1 
00:05:33.601 00:05:33.601 ' 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.601 --rc genhtml_branch_coverage=1 00:05:33.601 --rc genhtml_function_coverage=1 00:05:33.601 --rc genhtml_legend=1 00:05:33.601 --rc geninfo_all_blocks=1 00:05:33.601 --rc geninfo_unexecuted_blocks=1 00:05:33.601 00:05:33.601 ' 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.601 --rc genhtml_branch_coverage=1 00:05:33.601 --rc genhtml_function_coverage=1 00:05:33.601 --rc genhtml_legend=1 00:05:33.601 --rc geninfo_all_blocks=1 00:05:33.601 --rc geninfo_unexecuted_blocks=1 00:05:33.601 00:05:33.601 ' 00:05:33.601 09:07:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:33.601 09:07:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=931930 00:05:33.601 09:07:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 931930 00:05:33.601 09:07:34 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 931930 ']' 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.601 09:07:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:33.601 [2024-11-19 09:07:34.646326] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
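cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so the server accepts exactly those two methods and rejects everything else with JSON-RPC -32601 "Method not found", which is what the env_dpdk_get_mem_stats probe further down demonstrates. In rpc.py terms:

```bash
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC spdk_get_version          # allowed: returns the version JSON shown below
$RPC rpc_get_methods           # allowed: lists only the permitted methods
$RPC env_dpdk_get_mem_stats    # rejected: -32601 "Method not found"
```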
00:05:33.601 [2024-11-19 09:07:34.646372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931930 ] 00:05:33.861 [2024-11-19 09:07:34.722145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.861 [2024-11-19 09:07:34.765077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.121 09:07:34 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:34.121 09:07:34 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:34.121 09:07:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:34.121 { 00:05:34.121 "version": "SPDK v25.01-pre git sha1 a7ec5bc8e", 00:05:34.121 "fields": { 00:05:34.121 "major": 25, 00:05:34.121 "minor": 1, 00:05:34.121 "patch": 0, 00:05:34.121 "suffix": "-pre", 00:05:34.121 "commit": "a7ec5bc8e" 00:05:34.121 } 00:05:34.121 } 00:05:34.121 09:07:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:34.121 09:07:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:34.121 09:07:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:34.121 09:07:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:34.121 09:07:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:34.121 09:07:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:34.121 09:07:35 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.121 09:07:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:34.121 09:07:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:34.121 09:07:35 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.380 09:07:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:34.380 09:07:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:34.380 09:07:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.380 request: 00:05:34.380 { 00:05:34.380 "method": "env_dpdk_get_mem_stats", 00:05:34.380 "req_id": 1 00:05:34.380 } 00:05:34.380 Got JSON-RPC error response 00:05:34.380 response: 00:05:34.380 { 00:05:34.380 "code": -32601, 00:05:34.380 "message": "Method not found" 00:05:34.380 } 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.380 09:07:35 app_cmdline -- app/cmdline.sh@1 -- # killprocess 931930 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 931930 ']' 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 931930 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:34.380 09:07:35 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 931930 00:05:34.639 09:07:35 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:34.639 09:07:35 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:34.639 09:07:35 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 931930' 00:05:34.639 killing process with pid 931930 00:05:34.639 09:07:35 app_cmdline -- common/autotest_common.sh@971 -- # kill 931930 00:05:34.639 09:07:35 app_cmdline -- common/autotest_common.sh@976 -- # wait 931930 00:05:34.899 00:05:34.899 real 0m1.330s 00:05:34.899 user 0m1.562s 00:05:34.899 sys 0m0.442s 00:05:34.899 09:07:35 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.899 09:07:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:34.899 ************************************ 00:05:34.899 END TEST app_cmdline 00:05:34.899 ************************************ 00:05:34.899 09:07:35 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:34.899 09:07:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.899 09:07:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.899 09:07:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.899 ************************************ 00:05:34.899 START TEST version 00:05:34.899 ************************************ 00:05:34.899 09:07:35 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:34.899 * Looking for test storage... 
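version.sh, traced below after the coverage preamble, scrapes include/spdk/version.h with a grep, cut, tr pipeline and then cross-checks the result against the Python package. A standalone reproduction of that pipeline, assuming the SPDK checkout used by this job:

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
hdr=$SPDK/include/spdk/version.h

get_header_version() {
    # version.h is tab-separated, hence the bare `cut -f2`
    grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

get_header_version MAJOR    # 25
get_header_version MINOR    # 1
get_header_version PATCH    # 0
get_header_version SUFFIX   # -pre
```

The harness folds these into 25.1, maps the -pre suffix to an rc0 tag (25.1rc0), and requires python3 -c 'import spdk; print(spdk.__version__)' to report the same string.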
00:05:34.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:34.899 09:07:35 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:34.899 09:07:35 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:34.899 09:07:35 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.159 09:07:35 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.159 09:07:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.159 09:07:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.159 09:07:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.159 09:07:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.159 09:07:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.159 09:07:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.159 09:07:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.159 09:07:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.159 09:07:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.159 09:07:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.159 09:07:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.159 09:07:35 version -- scripts/common.sh@344 -- # case "$op" in 00:05:35.159 09:07:35 version -- scripts/common.sh@345 -- # : 1 00:05:35.159 09:07:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.159 09:07:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.159 09:07:35 version -- scripts/common.sh@365 -- # decimal 1 00:05:35.159 09:07:35 version -- scripts/common.sh@353 -- # local d=1 00:05:35.159 09:07:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.159 09:07:35 version -- scripts/common.sh@355 -- # echo 1 00:05:35.159 09:07:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.159 09:07:35 version -- scripts/common.sh@366 -- # decimal 2 00:05:35.159 09:07:35 version -- scripts/common.sh@353 -- # local d=2 00:05:35.159 09:07:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.159 09:07:35 version -- scripts/common.sh@355 -- # echo 2 00:05:35.159 09:07:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.159 09:07:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.159 09:07:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.159 09:07:35 version -- scripts/common.sh@368 -- # return 0 00:05:35.159 09:07:35 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.159 09:07:35 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.159 --rc genhtml_branch_coverage=1 00:05:35.159 --rc genhtml_function_coverage=1 00:05:35.159 --rc genhtml_legend=1 00:05:35.159 --rc geninfo_all_blocks=1 00:05:35.159 --rc geninfo_unexecuted_blocks=1 00:05:35.159 00:05:35.159 ' 00:05:35.159 09:07:35 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.159 --rc genhtml_branch_coverage=1 00:05:35.159 --rc genhtml_function_coverage=1 00:05:35.159 --rc genhtml_legend=1 00:05:35.159 --rc geninfo_all_blocks=1 00:05:35.159 --rc geninfo_unexecuted_blocks=1 00:05:35.159 00:05:35.159 ' 00:05:35.159 09:07:35 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.159 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.159 --rc genhtml_branch_coverage=1 00:05:35.159 --rc genhtml_function_coverage=1 00:05:35.159 --rc genhtml_legend=1 00:05:35.159 --rc geninfo_all_blocks=1 00:05:35.159 --rc geninfo_unexecuted_blocks=1 00:05:35.159 00:05:35.159 ' 00:05:35.159 09:07:35 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.159 --rc genhtml_branch_coverage=1 00:05:35.159 --rc genhtml_function_coverage=1 00:05:35.159 --rc genhtml_legend=1 00:05:35.159 --rc geninfo_all_blocks=1 00:05:35.159 --rc geninfo_unexecuted_blocks=1 00:05:35.159 00:05:35.159 ' 00:05:35.159 09:07:35 version -- app/version.sh@17 -- # get_header_version major 00:05:35.159 09:07:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:35.159 09:07:36 version -- app/version.sh@14 -- # cut -f2 00:05:35.159 09:07:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.159 09:07:36 version -- app/version.sh@17 -- # major=25 00:05:35.159 09:07:36 version -- app/version.sh@18 -- # get_header_version minor 00:05:35.159 09:07:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:35.159 09:07:36 version -- app/version.sh@14 -- # cut -f2 00:05:35.159 09:07:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.159 09:07:36 version -- app/version.sh@18 -- # minor=1 00:05:35.159 09:07:36 version -- app/version.sh@19 -- # get_header_version patch 00:05:35.159 09:07:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:35.159 09:07:36 version -- app/version.sh@14 -- # cut -f2 00:05:35.159 09:07:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.159 09:07:36 version -- app/version.sh@19 -- # patch=0 00:05:35.159 09:07:36 version -- app/version.sh@20 -- # get_header_version suffix 00:05:35.159 09:07:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:35.159 09:07:36 version -- app/version.sh@14 -- # cut -f2 00:05:35.159 09:07:36 version -- app/version.sh@14 -- # tr -d '"' 00:05:35.159 09:07:36 version -- app/version.sh@20 -- # suffix=-pre 00:05:35.159 09:07:36 version -- app/version.sh@22 -- # version=25.1 00:05:35.159 09:07:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:35.159 09:07:36 version -- app/version.sh@28 -- # version=25.1rc0 00:05:35.159 09:07:36 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:35.160 09:07:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:35.160 09:07:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:35.160 09:07:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:35.160 00:05:35.160 real 0m0.246s 00:05:35.160 user 0m0.158s 00:05:35.160 sys 0m0.130s 00:05:35.160 09:07:36 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.160 
09:07:36 version -- common/autotest_common.sh@10 -- # set +x 00:05:35.160 ************************************ 00:05:35.160 END TEST version 00:05:35.160 ************************************ 00:05:35.160 09:07:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:35.160 09:07:36 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:35.160 09:07:36 -- spdk/autotest.sh@194 -- # uname -s 00:05:35.160 09:07:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:35.160 09:07:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:35.160 09:07:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:35.160 09:07:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:35.160 09:07:36 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:35.160 09:07:36 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:35.160 09:07:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.160 09:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.160 09:07:36 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:35.160 09:07:36 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:35.160 09:07:36 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:35.160 09:07:36 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:35.160 09:07:36 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:35.160 09:07:36 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:35.160 09:07:36 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:35.160 09:07:36 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:35.160 09:07:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.160 09:07:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.160 ************************************ 00:05:35.160 START TEST nvmf_tcp 00:05:35.160 ************************************ 00:05:35.160 09:07:36 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:35.420 * Looking for test storage... 
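autotest.sh picks the transport branch from the job configuration (SPDK_TEST_NVMF_TRANSPORT=tcp and NET_TYPE=phy in autorun-spdk.conf): the rdma comparison above fails, the tcp one matches, and nvmf.sh is launched with --transport=tcp, which immediately fans out into nvmf_target_core. Paraphrased from the xtrace; the variable name is taken from the conf file rather than the trace itself:

```bash
if [ "$SPDK_TEST_NVMF_TRANSPORT" = rdma ]; then
    :   # rdma-only setup would run here; skipped in this job
elif [ "$SPDK_TEST_NVMF_TRANSPORT" = tcp ]; then
    run_test nvmf_tcp "$rootdir/test/nvmf/nvmf.sh" --transport=tcp
fi
```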
00:05:35.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.420 09:07:36 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.420 --rc genhtml_branch_coverage=1 00:05:35.420 --rc genhtml_function_coverage=1 00:05:35.420 --rc genhtml_legend=1 00:05:35.420 --rc geninfo_all_blocks=1 00:05:35.420 --rc geninfo_unexecuted_blocks=1 00:05:35.420 00:05:35.420 ' 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.420 --rc genhtml_branch_coverage=1 00:05:35.420 --rc genhtml_function_coverage=1 00:05:35.420 --rc genhtml_legend=1 00:05:35.420 --rc geninfo_all_blocks=1 00:05:35.420 --rc geninfo_unexecuted_blocks=1 00:05:35.420 00:05:35.420 ' 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:35.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.420 --rc genhtml_branch_coverage=1 00:05:35.420 --rc genhtml_function_coverage=1 00:05:35.420 --rc genhtml_legend=1 00:05:35.420 --rc geninfo_all_blocks=1 00:05:35.420 --rc geninfo_unexecuted_blocks=1 00:05:35.420 00:05:35.420 ' 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.420 --rc genhtml_branch_coverage=1 00:05:35.420 --rc genhtml_function_coverage=1 00:05:35.420 --rc genhtml_legend=1 00:05:35.420 --rc geninfo_all_blocks=1 00:05:35.420 --rc geninfo_unexecuted_blocks=1 00:05:35.420 00:05:35.420 ' 00:05:35.420 09:07:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:35.420 09:07:36 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:35.420 09:07:36 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.420 09:07:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.420 ************************************ 00:05:35.420 START TEST nvmf_target_core 00:05:35.420 ************************************ 00:05:35.420 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:35.420 * Looking for test storage... 00:05:35.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.681 --rc genhtml_branch_coverage=1 00:05:35.681 --rc genhtml_function_coverage=1 00:05:35.681 --rc genhtml_legend=1 00:05:35.681 --rc geninfo_all_blocks=1 00:05:35.681 --rc geninfo_unexecuted_blocks=1 00:05:35.681 00:05:35.681 ' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.681 --rc genhtml_branch_coverage=1 00:05:35.681 --rc genhtml_function_coverage=1 00:05:35.681 --rc genhtml_legend=1 00:05:35.681 --rc geninfo_all_blocks=1 00:05:35.681 --rc geninfo_unexecuted_blocks=1 00:05:35.681 00:05:35.681 ' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.681 --rc genhtml_branch_coverage=1 00:05:35.681 --rc genhtml_function_coverage=1 00:05:35.681 --rc genhtml_legend=1 00:05:35.681 --rc geninfo_all_blocks=1 00:05:35.681 --rc geninfo_unexecuted_blocks=1 00:05:35.681 00:05:35.681 ' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.681 --rc genhtml_branch_coverage=1 00:05:35.681 --rc genhtml_function_coverage=1 00:05:35.681 --rc genhtml_legend=1 00:05:35.681 --rc geninfo_all_blocks=1 00:05:35.681 --rc geninfo_unexecuted_blocks=1 00:05:35.681 00:05:35.681 ' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.681 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:35.682 
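Aside on the "[: : integer expression expected" message above: nvmf/common.sh line 33 runs the traced test '[' '' -eq 1 ']', and the test builtin requires integer operands for -eq, so an unset/empty flag variable makes the comparison itself error out. The exit status is non-zero, so the branch is simply not taken and the run continues. A minimal sketch of the usual guard (the flag name is hypothetical; the trace only shows its empty expansion):

  # "$flag" stands in for whichever variable common.sh line 33 tests
  if [ "${flag:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi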
************************************ 00:05:35.682 START TEST nvmf_abort 00:05:35.682 ************************************ 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:35.682 * Looking for test storage... 00:05:35.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.682 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:35.993 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.994 --rc genhtml_branch_coverage=1 00:05:35.994 --rc genhtml_function_coverage=1 00:05:35.994 --rc genhtml_legend=1 00:05:35.994 --rc geninfo_all_blocks=1 00:05:35.994 --rc geninfo_unexecuted_blocks=1 00:05:35.994 00:05:35.994 ' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.994 --rc genhtml_branch_coverage=1 00:05:35.994 --rc genhtml_function_coverage=1 00:05:35.994 --rc genhtml_legend=1 00:05:35.994 --rc geninfo_all_blocks=1 00:05:35.994 --rc geninfo_unexecuted_blocks=1 00:05:35.994 00:05:35.994 ' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.994 --rc genhtml_branch_coverage=1 00:05:35.994 --rc genhtml_function_coverage=1 00:05:35.994 --rc genhtml_legend=1 00:05:35.994 --rc geninfo_all_blocks=1 00:05:35.994 --rc geninfo_unexecuted_blocks=1 00:05:35.994 00:05:35.994 ' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.994 --rc genhtml_branch_coverage=1 00:05:35.994 --rc genhtml_function_coverage=1 00:05:35.994 --rc genhtml_legend=1 00:05:35.994 --rc geninfo_all_blocks=1 00:05:35.994 --rc geninfo_unexecuted_blocks=1 00:05:35.994 00:05:35.994 ' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
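As the repeated PATH exports above show, every nested 'source paths/export.sh' prepends the go/protoc/golangci directories again, so PATH accumulates duplicate entries as the test scripts source one another. A dedup pass that keeps the first occurrence of each entry could look like this sketch (not part of the suite):

  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:*$//')
  export PATH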
00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:35.994 09:07:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:41.640 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:41.641 09:07:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:41.641 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:41.641 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:41.641 09:07:42 
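The device scan above keys off PCI vendor:device pairs cached earlier in pci_bus_cache (the cache scan itself is not shown in this excerpt): 0x8086:0x159b is an Intel E810 function bound to the ice driver, which is why both 0000:86:00.0 and 0000:86:00.1 land in the e810 array and stay in pci_devs for the TCP run. A standalone sketch of the same sysfs lookup, independent of the harness:

  for pci in /sys/bus/pci/devices/*; do
      ven=$(cat "$pci/vendor"); dev=$(cat "$pci/device")
      if [ "$ven" = 0x8086 ] && [ "$dev" = 0x159b ]; then
          echo "E810 at ${pci##*/}, net devs: $(ls "$pci/net" 2>/dev/null)"
      fi
  done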
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:41.641 Found net devices under 0000:86:00.0: cvl_0_0 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:41.641 Found net devices under 0000:86:00.1: cvl_0_1 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:41.641 09:07:42 
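nvmf_tcp_init, traced next, isolates the target port in its own network namespace so initiator and target traffic actually crosses the link between the two E810 functions. Condensed to plain ip/iptables commands (device names and addresses taken from this run):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP port 4420

The two pings that follow (10.0.0.2 from the default namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) verify the wiring in both directions before the target is started.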
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:41.641 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:41.900 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:41.900 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:41.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:41.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:05:41.901 00:05:41.901 --- 10.0.0.2 ping statistics --- 00:05:41.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.901 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:41.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:41.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:05:41.901 00:05:41.901 --- 10.0.0.1 ping statistics --- 00:05:41.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.901 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=935628 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 935628 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 935628 ']' 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:41.901 09:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:41.901 [2024-11-19 09:07:42.925816] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:05:41.901 [2024-11-19 09:07:42.925864] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:42.161 [2024-11-19 09:07:43.004361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.161 [2024-11-19 09:07:43.044665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:42.161 [2024-11-19 09:07:43.044702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:42.161 [2024-11-19 09:07:43.044709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:42.161 [2024-11-19 09:07:43.044715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:42.161 [2024-11-19 09:07:43.044720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:42.161 [2024-11-19 09:07:43.046189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.161 [2024-11-19 09:07:43.046274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.161 [2024-11-19 09:07:43.046275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.161 [2024-11-19 09:07:43.190593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.161 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.420 Malloc0 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.420 Delay0 
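With nvmf_tgt up and its reactors running, the test assembles its stack over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, and a delay bdev layered on top; the subsystem nqn.2016-06.io.spdk:cnode0 created just below then exposes Delay0 on 10.0.0.2:4420. rpc_cmd is the harness wrapper around scripts/rpc.py, so outside the harness the same stack could be built roughly as (arguments copied from this trace; -r/-t/-w/-n set average and 99th-percentile read/write latencies, about one second each assuming the usual microsecond units):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420

The large delay-bdev latencies are the point of this setup: I/O stays outstanding long enough for the abort test to cancel it.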
00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.420 [2024-11-19 09:07:43.264250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.420 09:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:42.420 [2024-11-19 09:07:43.433087] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:44.956 Initializing NVMe Controllers 00:05:44.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:44.956 controller IO queue size 128 less than required 00:05:44.956 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:44.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:44.956 Initialization complete. Launching workers. 
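The workload behind the counters that follow is SPDK's standalone abort example, invoked above via target/abort.sh: it connects at queue depth 128 (-q 128) on a single core (-c 0x1), submits I/O against the delay-bdev namespace for one second (-t 1), and issues abort commands for the outstanding requests. Run outside the harness it reduces to (connection string from this run):

  ./build/examples/abort -c 0x1 -t 1 -l warning -q 128 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

In the summary below, 36974 aborts submitted with 36917 successes suggests nearly every abort caught its target I/O still pending, which is what the delay bdev is there to guarantee.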
00:05:44.956 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36913 00:05:44.956 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36974, failed to submit 62 00:05:44.956 success 36917, unsuccessful 57, failed 0 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:44.956 rmmod nvme_tcp 00:05:44.956 rmmod nvme_fabrics 00:05:44.956 rmmod nvme_keyring 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 935628 ']' 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 935628 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 935628 ']' 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 935628 00:05:44.956 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 935628 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 935628' 00:05:44.957 killing process with pid 935628 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 935628 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 935628 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:44.957 09:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:46.862 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:46.862 00:05:46.862 real 0m11.241s 00:05:46.862 user 0m11.695s 00:05:46.862 sys 0m5.453s 00:05:46.862 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.862 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:46.862 ************************************ 00:05:46.862 END TEST nvmf_abort 00:05:46.862 ************************************ 00:05:46.862 09:07:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:46.862 09:07:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:46.862 09:07:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.863 09:07:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:47.123 ************************************ 00:05:47.123 START TEST nvmf_ns_hotplug_stress 00:05:47.123 ************************************ 00:05:47.123 09:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:47.123 * Looking for test storage... 
00:05:47.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.123 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.123 --rc genhtml_branch_coverage=1 00:05:47.124 --rc genhtml_function_coverage=1 00:05:47.124 --rc genhtml_legend=1 00:05:47.124 --rc geninfo_all_blocks=1 00:05:47.124 --rc geninfo_unexecuted_blocks=1 00:05:47.124 00:05:47.124 ' 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.124 --rc genhtml_branch_coverage=1 00:05:47.124 --rc genhtml_function_coverage=1 00:05:47.124 --rc genhtml_legend=1 00:05:47.124 --rc geninfo_all_blocks=1 00:05:47.124 --rc geninfo_unexecuted_blocks=1 00:05:47.124 00:05:47.124 ' 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.124 --rc genhtml_branch_coverage=1 00:05:47.124 --rc genhtml_function_coverage=1 00:05:47.124 --rc genhtml_legend=1 00:05:47.124 --rc geninfo_all_blocks=1 00:05:47.124 --rc geninfo_unexecuted_blocks=1 00:05:47.124 00:05:47.124 ' 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.124 --rc genhtml_branch_coverage=1 00:05:47.124 --rc genhtml_function_coverage=1 00:05:47.124 --rc genhtml_legend=1 00:05:47.124 --rc geninfo_all_blocks=1 00:05:47.124 --rc geninfo_unexecuted_blocks=1 00:05:47.124 00:05:47.124 ' 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:47.124 09:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:53.699 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.699 
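A condensed sketch of the NIC-classification step traced in these records may help: nvmf/common.sh keeps one array of "vendor:device" PCI IDs per NIC family, fills each bucket from a pci_bus_cache lookup (the bus scan that populates the cache is not shown in this log), and for an e810 tcp/phy run promotes the e810 bucket to pci_devs before resolving each function's kernel netdev through sysfs. The snippet below is our simplification of those traced lines, not the script verbatim; only IDs and globs visible in the trace are used.

  # Bucket supported NICs by PCI vendor:device ID (per the @320-@346 records above).
  intel=0x8086 mellanox=0x15b3
  e810+=(${pci_bus_cache["$intel:0x1592"]})
  e810+=(${pci_bus_cache["$intel:0x159b"]})   # this run: 0000:86:00.0/1, ice driver
  pci_devs=("${e810[@]}")
  # Resolve each matched function to its kernel net device via sysfs
  # (per the @410-@428 records; yields cvl_0_0 and cvl_0_1 on this node).
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path prefix
      net_devs+=("${pci_net_devs[@]}")
  done

The sysfs glob is what ties the "Found 0000:86:00.0 (0x8086 - 0x159b)" lines to the "Found net devices under ...: cvl_0_0" lines that follow.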
09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:53.699 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:53.699 Found net devices under 0000:86:00.0: cvl_0_0 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:53.699 Found net devices under 0000:86:00.1: cvl_0_1 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:53.699 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:53.700 09:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:53.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:53.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:05:53.700 00:05:53.700 --- 10.0.0.2 ping statistics --- 00:05:53.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.700 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:53.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:53.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:05:53.700 00:05:53.700 --- 10.0.0.1 ping statistics --- 00:05:53.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:53.700 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=939651 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 939651 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
939651 ']' 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:53.700 [2024-11-19 09:07:54.217016] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:05:53.700 [2024-11-19 09:07:54.217058] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.700 [2024-11-19 09:07:54.292680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.700 [2024-11-19 09:07:54.332471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.700 [2024-11-19 09:07:54.332508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.700 [2024-11-19 09:07:54.332515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.700 [2024-11-19 09:07:54.332521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.700 [2024-11-19 09:07:54.332526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
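Before the subsystem configuration that follows, the records above are easier to read as the command sequence they trace: the target side is isolated in its own network namespace so initiator and target can share one host, the firewall is opened for the NVMe/TCP port, connectivity is verified in both directions, and nvmf_tgt is launched inside the namespace. A condensed sketch; $SPDK_DIR is our shorthand for the long jenkins workspace path, not a variable from the script:

  # Namespace wiring (per the @265-@284 records): cvl_0_0 becomes the target
  # port at 10.0.0.2 inside cvl_0_0_ns_spdk; cvl_0_1 stays in the root
  # namespace as the initiator port at 10.0.0.1.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port and verify reachability both ways (@287-@291).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target inside the namespace (@508; -m 0xE pins reactors to cores 1-3,
  # matching the three "Reactor started" notices below).
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE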
00:05:53.700 [2024-11-19 09:07:54.333882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.700 [2024-11-19 09:07:54.333990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.700 [2024-11-19 09:07:54.333991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:53.700 [2024-11-19 09:07:54.654581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.700 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:53.959 09:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:54.217 [2024-11-19 09:07:55.052065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:54.217 09:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:54.217 09:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:54.475 Malloc0 00:05:54.475 09:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:54.733 Delay0 00:05:54.733 09:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.992 09:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:55.250 NULL1 00:05:55.250 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:55.250 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:55.250 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=940131 00:05:55.250 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:05:55.250 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.508 Read completed with error (sct=0, sc=11) 00:05:55.508 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.767 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:55.767 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:56.026 true 00:05:56.026 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:05:56.026 09:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.963 09:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.963 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.963 09:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:56.964 09:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:57.222 true 00:05:57.222 09:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:05:57.222 09:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.481 09:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.481 09:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:57.481 09:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:57.740 true 00:05:57.740 09:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:05:57.740 09:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.128 09:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.128 09:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:59.128 09:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:59.387 true 00:05:59.387 09:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:05:59.387 09:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.646 09:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.904 09:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:59.904 09:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:59.904 true 00:05:59.904 09:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:05:59.904 09:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.282 09:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.282 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:01.282 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:01.282 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:01.282 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:01.540 true 00:06:01.540 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:01.540 09:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.476 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.476 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:02.476 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:02.735 true 00:06:02.735 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:02.735 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.735 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.994 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:02.994 09:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:03.253 true 00:06:03.253 09:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:03.253 09:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.190 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.449 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:04.449 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:04.449 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:04.708 true 00:06:04.708 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:04.708 09:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.645 09:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.645 09:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:05.645 09:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:05.903 true 00:06:05.903 09:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:05.903 09:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.162 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.421 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:06.421 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:06.421 true 00:06:06.680 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:06.680 09:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.617 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.875 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:07.875 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:07.875 true 00:06:07.875 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:07.875 09:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.134 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.393 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:08.393 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:08.652 true 00:06:08.652 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:08.652 09:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.032 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.032 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:10.032 09:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:10.032 true 00:06:10.291 09:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:10.291 09:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.859 09:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:11.119 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:11.119 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:11.378 true 00:06:11.378 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:11.378 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.637 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.896 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:11.896 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:11.896 true 00:06:11.896 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:11.896 09:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.274 09:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:13.274 09:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:13.274 09:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:13.533 true 00:06:13.533 09:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:13.533 09:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.470 09:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.470 09:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:14.470 09:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:14.728 true 00:06:14.728 09:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:14.728 09:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.987 09:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.987 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 
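The surrounding records all repeat one cycle of the stress loop: check that the perf initiator (PERF_PID 940131) is still alive, rip namespace 1 out of cnode1, re-attach Delay0, and grow NULL1 by one unit (null_size 1001, 1002, ...). The suppressed "Read completed with error (sct=0, sc=11)" messages are, presumably, the initiator's reads racing those removals. A condensed sketch reconstructed from the traced ns_hotplug_stress.sh line numbers (@44-@50); $rpc is our shorthand for the full rpc.py path:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID"; do                                  # @44: perf still running?
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 # @45
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46
      null_size=$((null_size + 1))                               # @49
      $rpc bdev_null_resize NULL1 "$null_size"                   # @50
  done

Keying the loop on kill -0 means the hotplug churn runs for exactly as long as the 30-second perf workload stays up, which is why the cycle repeats through null_size 1029 below.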
00:06:14.987 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:15.246 true 00:06:15.246 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:15.246 09:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.623 09:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.623 09:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:16.623 09:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:16.623 true 00:06:16.881 09:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:16.881 09:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.450 09:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.708 09:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:17.708 09:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:17.967 true 00:06:17.967 09:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:17.967 09:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.225 09:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.484 09:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:18.484 09:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:18.484 true 00:06:18.484 09:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:18.484 09:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.860 09:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.860 09:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:19.860 09:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:20.118 true 00:06:20.118 09:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:20.118 09:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.053 09:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.053 09:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:21.053 09:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:21.312 true 00:06:21.312 09:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:21.312 09:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.570 09:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.829 09:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:21.829 09:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:21.829 true 00:06:21.829 09:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:21.829 09:08:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.206 09:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.206 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.206 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:23.206 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:23.465 true 00:06:23.465 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:23.465 09:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.403 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.403 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:24.403 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:24.662 true 00:06:24.662 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:24.662 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.920 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.920 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:24.920 09:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:25.178 true 00:06:25.178 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131 00:06:25.178 09:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:26.553 Initializing NVMe Controllers
00:06:26.553 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:26.553 Controller IO queue size 128, less than required.
00:06:26.553 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:26.553 Controller IO queue size 128, less than required.
00:06:26.553 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:26.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:26.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:26.553 Initialization complete. Launching workers.
00:06:26.553 ========================================================
00:06:26.553                                                                           Latency(us)
00:06:26.553 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:06:26.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1980.30       0.97   44597.84    1868.66 1027195.05
00:06:26.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16932.53       8.27    7559.01    2347.57  381356.61
00:06:26.553 ========================================================
00:06:26.553 Total                                                                   :   18912.83       9.23   11437.22    1868.66 1027195.05
00:06:26.553
00:06:26.553 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:26.553 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:26.553 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:26.553 true
00:06:26.812 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 940131
00:06:26.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (940131) - No such process
00:06:26.812 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 940131
00:06:26.812 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:26.812 09:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:27.070 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:27.070 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:27.070 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:27.070 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:27.070 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:27.329 null0
00:06:27.329 09:08:28
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.329 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.329 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:27.329 null1 00:06:27.588 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.588 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.588 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:27.588 null2 00:06:27.588 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.588 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.588 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:27.846 null3 00:06:27.846 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:27.846 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:27.846 09:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:28.105 null4 00:06:28.105 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.105 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.105 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:28.363 null5 00:06:28.363 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.363 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.363 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:28.363 null6 00:06:28.363 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.363 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.363 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:28.622 null7 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
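At this point the single-threaded phase of ns_hotplug_stress is done: the I/O generator (PID 940131) has exited, both namespaces have been detached, and eight 100 MiB null bdevs with 4096-byte blocks (null0 through null7) have been created for the concurrent phase. Two things in the output above are worth unpacking. First, the Total row of the performance summary is the IOPS-weighted mean of the per-namespace averages: (1980.30 * 44597.84 + 16932.53 * 7559.01) / 18912.83 = 11437.22 us, exactly the reported Total; note that the hot-plugged NSID 1 averages ~44.6 ms per I/O against ~7.6 ms for NSID 2, and the Total min/max columns are simply the elementwise extremes of the two rows. Second, the sh@44-50 xtrace entries trace a resize/hotplug loop of roughly the following shape (a sketch reconstructed from this log; $rpc abbreviates the rpc.py path and $perf_pid stands for 940131; the starting null_size is an assumption, since the log only shows 1022..1029):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1021                                                     # assumed start; first visible resize is 1022
    while kill -0 "$perf_pid" 2>/dev/null; do                          # sh@44: run until the I/O generator exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1 under load
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: hot-add it back
        null_size=$((null_size + 1))                                   # sh@49: 1022, 1023, ... 1029 in this run
        $rpc bdev_null_resize NULL1 "$null_size"                       # sh@50: grow the NULL1 bdev each pass
    done
    wait "$perf_pid"                                                   # sh@53: reap the exited generator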
00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 945745 945747 945748 945750 945752 945754 945756 945758 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:28.622 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:28.882 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:28.882 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:28.882 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.882 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:28.882 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:28.882 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:28.882 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:28.882 09:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.140 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.399 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.399 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.399 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.399 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.399 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.399 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.399 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.399 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.659 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.660 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
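The heavily interleaved sh@16-18 entries from here on are the concurrent phase: eight background workers, reaped by the sh@66 wait above (PIDs 945745 through 945758), each churning one namespace against its own null bdev. Reconstructed from the xtrace (sh@14-18 for the worker body, sh@58-66 for the spawn), the driver looks roughly like this, with the same $rpc shorthand as before:

    add_remove() {                                      # sh@14-18: one worker per namespace
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                  # sh@16: ten add/remove rounds
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }

    nthreads=8                                          # sh@58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                # sh@59-60: create null0..null7 first
        $rpc bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do                # sh@62-64: spawn the workers
        add_remove "$((i + 1))" "null$i" &              # sh@63: NSID i+1 churned against null<i>
        pids+=($!)                                      # sh@64
    done
    wait "${pids[@]}"                                   # sh@66: blocks until all eight finish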
00:06:29.919 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.920 09:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.179 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.179 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.179 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.179 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.179 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.179 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.179 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.179 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.438 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.697 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.698 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.698 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.698 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.698 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.698 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.698 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.698 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.957 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.958 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.958 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.958 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.958 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.958 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.958 09:08:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.958 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.217 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.477 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.477 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.477 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.477 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.477 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.477 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.477 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.477 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.737 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.996 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.996 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.997 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.997 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.997 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.997 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.997 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.997 09:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
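Taken together, this phase issues 8 workers x 10 rounds x 2 RPCs = 160 uncoordinated namespace operations against nqn.2016-06.io.spdk:cnode1. The scheduler-dependent ordering of those calls is what produces the shuffled sh@16/sh@17/sh@18 interleavings visible here: each worker's (( i < 10 )) check and its add/remove pair land wherever that worker happens to get CPU time.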
00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.997 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.256 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.516 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:32.517 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.517 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.517 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:32.776 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.776 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.776 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.776 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.776 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.776 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.776 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.776 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.035 09:08:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:33.035 rmmod nvme_tcp 00:06:33.035 rmmod nvme_fabrics 00:06:33.035 rmmod nvme_keyring 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 939651 ']' 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 939651 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 939651 ']' 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 939651 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:33.035 09:08:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.035 09:08:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 939651 00:06:33.035 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:33.035 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:33.035 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 939651' 00:06:33.035 killing process with pid 939651 00:06:33.035 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 939651 00:06:33.035 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 939651 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.295 09:08:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:35.833 00:06:35.833 real 0m48.334s 00:06:35.833 user 3m16.944s 00:06:35.833 sys 0m15.678s 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.833 ************************************ 00:06:35.833 END TEST nvmf_ns_hotplug_stress 00:06:35.833 ************************************ 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:35.833 
************************************ 00:06:35.833 START TEST nvmf_delete_subsystem 00:06:35.833 ************************************ 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:35.833 * Looking for test storage... 00:06:35.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.833 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:35.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.834 --rc genhtml_branch_coverage=1 00:06:35.834 --rc genhtml_function_coverage=1 00:06:35.834 --rc genhtml_legend=1 00:06:35.834 --rc geninfo_all_blocks=1 00:06:35.834 --rc geninfo_unexecuted_blocks=1 00:06:35.834 00:06:35.834 ' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:35.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.834 --rc genhtml_branch_coverage=1 00:06:35.834 --rc genhtml_function_coverage=1 00:06:35.834 --rc genhtml_legend=1 00:06:35.834 --rc geninfo_all_blocks=1 00:06:35.834 --rc geninfo_unexecuted_blocks=1 00:06:35.834 00:06:35.834 ' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:35.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.834 --rc genhtml_branch_coverage=1 00:06:35.834 --rc genhtml_function_coverage=1 00:06:35.834 --rc genhtml_legend=1 00:06:35.834 --rc geninfo_all_blocks=1 00:06:35.834 --rc geninfo_unexecuted_blocks=1 00:06:35.834 00:06:35.834 ' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:35.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.834 --rc genhtml_branch_coverage=1 00:06:35.834 --rc genhtml_function_coverage=1 00:06:35.834 --rc genhtml_legend=1 00:06:35.834 --rc geninfo_all_blocks=1 00:06:35.834 --rc geninfo_unexecuted_blocks=1 00:06:35.834 00:06:35.834 ' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:35.834 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:35.835 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.835 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.835 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.835 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:35.835 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:35.835 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:35.835 09:08:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:42.407 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.407 
09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:42.407 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:42.407 Found net devices under 0000:86:00.0: cvl_0_0 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.407 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:42.408 Found net devices under 0000:86:00.1: cvl_0_1 
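
With both E810 ports mapped to kernel netdevs (cvl_0_0 and cvl_0_1), the nvmf_tcp_init stretch that follows moves the target-side port into a private network namespace and gives each side an address on 10.0.0.0/24, so initiator and target talk over real hardware while sharing one host. Condensed from the xtrace below into plain commands (flush and cleanup steps omitted; the comments are added):

# Target NIC goes into its own netns; the initiator NIC stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# 10.0.0.1 = initiator side (cvl_0_1), 10.0.0.2 = target side (cvl_0_0).
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, then sanity-ping
# both directions before the target application is started in the netns.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
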
00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:06:42.408 00:06:42.408 --- 10.0.0.2 ping statistics --- 00:06:42.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.408 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:06:42.408 00:06:42.408 --- 10.0.0.1 ping statistics --- 00:06:42.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.408 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=950142 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 950142 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 950142 ']' 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:42.408 09:08:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:42.408 09:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.408 [2024-11-19 09:08:42.652269] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:06:42.408 [2024-11-19 09:08:42.652313] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.408 [2024-11-19 09:08:42.733243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.408 [2024-11-19 09:08:42.778299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.408 [2024-11-19 09:08:42.778336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.408 [2024-11-19 09:08:42.778344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.408 [2024-11-19 09:08:42.778350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.408 [2024-11-19 09:08:42.778355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.408 [2024-11-19 09:08:42.779546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.408 [2024-11-19 09:08:42.779548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.667 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:42.667 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:42.667 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:42.667 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.667 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.667 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.668 [2024-11-19 09:08:43.533423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:42.668 09:08:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.668 [2024-11-19 09:08:43.553628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.668 NULL1 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.668 Delay0 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=950384 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:42.668 09:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:42.668 [2024-11-19 09:08:43.665428] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
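
From here delete_subsystem.sh stages the race this test exists to exercise: a five-second spdk_nvme_perf run is launched against cnode1 (its pid, 950384, is captured at script line 28), the script sleeps two seconds, and then line 32 — the first record of the next stretch — deletes the subsystem while perf still has roughly three seconds of queued I/O. In outline, per the xtrace; the '&' backgrounding is implied by the pid capture rather than shown verbatim, and rpc_cmd is the harness wrapper around rpc.py:

perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# 5 s of 512-byte random 70/30 read/write at queue depth 128 on cores 2-3
# (-c 0xC), against the Delay0 namespace exported by cnode1.
"$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &          # delete_subsystem.sh@26
perf_pid=$!                                            # @28 (950384 in this run)

sleep 2                                                # @30
# Tear the subsystem out from under the initiator mid-run.
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # @32
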
00:06:44.572 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:44.572 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:44.572 09:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:44.832 [40 "Read/Write completed with error (sct=0, sc=8)" completions and 10 "starting I/O failed: -6" markers]
00:06:44.832 [2024-11-19 09:08:45.833475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12822c0 is same with the state(6) to be set
00:06:44.833 [further "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" markers]
00:06:44.833 [2024-11-19 09:08:45.834159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52f000d4d0 is same with the state(6) to be set
00:06:45.769 [2024-11-19 09:08:46.802859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12839a0 is same with the state(6) to be set
00:06:46.029 [further "Read/Write completed with error (sct=0, sc=8)" completions]
00:06:46.029 [2024-11-19 09:08:46.836897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52f000d800 is same with the state(6) to be set
00:06:46.029 [further "Read/Write completed with error (sct=0, sc=8)" completions]
00:06:46.029 [2024-11-19 09:08:46.837049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52f0000c40 is same with the state(6) to be set
00:06:46.030 [further "Read/Write completed with error (sct=0, sc=8)" completions]
00:06:46.030 [2024-11-19 09:08:46.837176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f52f000d020 is same with the state(6) to be set
00:06:46.030 [further "Read/Write completed with error (sct=0, sc=8)" completions]
00:06:46.030 [2024-11-19 09:08:46.837742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12824a0 is same with the state(6) to be set
00:06:46.030 Initializing NVMe Controllers
00:06:46.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:46.030 Controller IO queue size 128, less than required.
00:06:46.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:46.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:46.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:46.030 Initialization complete. Launching workers.
00:06:46.030 ========================================================
00:06:46.030                                                    Latency(us)
00:06:46.030 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:06:46.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  156.57    0.08  871145.65     327.81 1009007.36
00:06:46.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  169.99    0.08 1027389.07     634.68 2002078.75
00:06:46.030 ========================================================
00:06:46.030 Total                                                                    :  326.55    0.16  952477.84     327.81 2002078.75
00:06:46.030
00:06:46.030 [2024-11-19 09:08:46.838451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12839a0 (9): Bad file descriptor
00:06:46.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:46.030 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.030 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:46.030 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 950384
00:06:46.030 09:08:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:46.289 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 950384
00:06:46.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (950384) - No such process
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 950384
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 950384
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 950384
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:46.548 [2024-11-19 09:08:47.371011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=951077
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951077
00:06:46.548 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:46.548 [2024-11-19 09:08:47.467114] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
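Both halves of the test then gate on the perf process the same way: probe it with kill -0 until it exits, bounded by a delay counter (the first run allows 30 half-second iterations, the second 20), which the trace below repeats until the pid disappears. A sketch of that pattern, assuming perf_pid was captured when perf was backgrounded:

  delay=0
  # kill -0 delivers no signal; it only tests whether the pid still exists.
  while kill -0 "$perf_pid" 2>/dev/null; do
      if (( delay++ > 20 )); then
          echo "perf did not finish in time" >&2
          exit 1
      fi
      sleep 0.5
  done
  # Reap the child; a perf run whose subsystem vanished exits non-zero,
  # which the NOT/wait idiom traced above deliberately tolerates.
  wait "$perf_pid" || true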
00:06:47.119 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:47.119 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951077
00:06:47.119 09:08:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:47.378 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:47.378 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951077
00:06:47.378 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:47.946 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:47.946 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951077
00:06:47.946 09:08:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:48.514 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:48.514 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951077
00:06:48.514 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:49.081 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:49.081 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951077
00:06:49.081 09:08:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:49.649 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:49.649 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951077
00:06:49.649 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:49.649 Initializing NVMe Controllers
00:06:49.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:49.649 Controller IO queue size 128, less than required.
00:06:49.649 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:49.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:49.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:49.649 Initialization complete. Launching workers.
00:06:49.649 ========================================================
00:06:49.649                                                    Latency(us)
00:06:49.649 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:06:49.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1002171.79 1000126.95 1041047.20
00:06:49.649 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1004439.21 1000160.65 1043728.05
00:06:49.649 ========================================================
00:06:49.649 Total                                                                    :  256.00    0.12 1003305.50 1000126.95 1043728.05
00:06:49.649
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 951077
00:06:49.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (951077) - No such process
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 951077
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:49.908 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 950142 ']'
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 950142
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 950142 ']'
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 950142
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:50.168 09:08:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 950142
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 950142'
killing process with pid 950142
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 950142
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 950142
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:50.168 09:08:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:52.708
00:06:52.708 real 0m16.917s
00:06:52.708 user 0m30.859s
00:06:52.708 sys 0m5.586s
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:52.708 ************************************
00:06:52.708 END TEST nvmf_delete_subsystem
00:06:52.708 ************************************
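The nvmftestfini teardown traced above is the mirror image of the setup: unload the host-side NVMe transport modules, kill the target by its saved pid, strip the SPDK-tagged iptables rules, and release the target's address. A condensed sketch of those steps (nvmfpid and the SPDK_NVMF comment tag as they appear in this log; not the helper's exact code):

  # Host side: detach the kernel initiator modules (the real helper retries this).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Target side: terminate the nvmf_tgt reactor process and reap it.
  kill "$nvmfpid" && wait "$nvmfpid"
  # Drop only the firewall rules this test added, identified by their comment.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1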
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:52.708 ************************************
00:06:52.708 START TEST nvmf_host_management
00:06:52.708 ************************************
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:52.708 * Looking for test storage...
00:06:52.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
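Before running the test body, autotest probes the installed lcov and compares its version against 2 using scripts/common.sh's lt()/cmp_versions, traced step by step below: both version strings are split on '.', '-' and ':', then compared element by element, with missing elements treated as zero. A standalone sketch of that comparison (a simplification; the real helper also sanitizes each element through decimal()):

  # Returns success when $1 < $2, e.g. lt 1.15 2, mirroring cmp_versions.
  lt() {
      local -a ver1 ver2
      local v len
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          # Missing elements compare as 0, so "2" behaves like "2.0".
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal is not "less than"
  }
  lt 1.15 2 && echo "1.15 < 2"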
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:52.708 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:52.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.709 --rc genhtml_branch_coverage=1
00:06:52.709 --rc genhtml_function_coverage=1
00:06:52.709 --rc genhtml_legend=1
00:06:52.709 --rc geninfo_all_blocks=1
00:06:52.709 --rc geninfo_unexecuted_blocks=1
00:06:52.709
00:06:52.709 '
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:52.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.709 --rc genhtml_branch_coverage=1
00:06:52.709 --rc genhtml_function_coverage=1
00:06:52.709 --rc genhtml_legend=1
00:06:52.709 --rc geninfo_all_blocks=1
00:06:52.709 --rc geninfo_unexecuted_blocks=1
00:06:52.709
00:06:52.709 '
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:52.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.709 --rc genhtml_branch_coverage=1
00:06:52.709 --rc genhtml_function_coverage=1
00:06:52.709 --rc genhtml_legend=1
00:06:52.709 --rc geninfo_all_blocks=1
00:06:52.709 --rc geninfo_unexecuted_blocks=1
00:06:52.709
00:06:52.709 '
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:52.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:52.709 --rc genhtml_branch_coverage=1
00:06:52.709 --rc genhtml_function_coverage=1
00:06:52.709 --rc genhtml_legend=1
00:06:52.709 --rc geninfo_all_blocks=1
00:06:52.709 --rc geninfo_unexecuted_blocks=1
00:06:52.709
00:06:52.709 '
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.709 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']'
00:06:52.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:06:52.710 09:08:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:59.282 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:06:59.283 Found 0000:86:00.0 (0x8086 - 0x159b)
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:06:59.283 Found 0000:86:00.1 (0x8086 - 0x159b)
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:06:59.283 Found net devices under 0000:86:00.0: cvl_0_0
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
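Device discovery here needs no vendor tool: for each whitelisted PCI function the helper globs sysfs for the net device registered under it and keeps only interfaces whose link is up (the [[ up == up ]] test), which is how 0000:86:00.0/1 resolve to cvl_0_0 and cvl_0_1, as the echo just below for the second port shows. A sketch of that lookup under the same sysfs layout (reading operstate is an assumption about what the helper compares):

  # For each NIC PCI function, list the kernel net devices bound to it.
  for pci in 0000:86:00.0 0000:86:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # unmatched glob stays literal
      for dev_path in "${pci_net_devs[@]}"; do
          dev=${dev_path##*/}                            # strip the sysfs prefix
          state=$(cat "$dev_path/operstate" 2>/dev/null)
          [[ $state == up ]] && echo "Found net devices under $pci: $dev"
      done
  done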
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:06:59.283 Found net devices under 0000:86:00.1: cvl_0_1
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:59.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:59.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms
00:06:59.283
00:06:59.283 --- 10.0.0.2 ping statistics ---
00:06:59.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:59.283 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:59.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:59.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms
00:06:59.283
00:06:59.283 --- 10.0.0.1 ping statistics ---
00:06:59.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:59.283 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=955159
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 955159
00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 955159 ']' 00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.283 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.284 [2024-11-19 09:08:59.595171] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:06:59.284 [2024-11-19 09:08:59.595220] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.284 [2024-11-19 09:08:59.674636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.284 [2024-11-19 09:08:59.718920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.284 [2024-11-19 09:08:59.718964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.284 [2024-11-19 09:08:59.718971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.284 [2024-11-19 09:08:59.718977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.284 [2024-11-19 09:08:59.718982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
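For reference, the target/initiator split traced above (nvmf/common.sh@250-291) reduces to the following sequence; a minimal sketch using this run's interface names and addresses, error handling omitted:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'                  # tagged so teardown can strip the rule later
    ping -c 1 10.0.0.2                                        # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> initiator

Running nvmf_tgt under the "ip netns exec cvl_0_0_ns_spdk" prefix (NVMF_TARGET_NS_CMD above) is what lets a single machine act as both NVMe/TCP target and initiator over the two e810 ports.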
00:06:59.284 [2024-11-19 09:08:59.720587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.284 [2024-11-19 09:08:59.720696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.284 [2024-11-19 09:08:59.720801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.284 [2024-11-19 09:08:59.720802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.284 [2024-11-19 09:08:59.858091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.284 Malloc0 00:06:59.284 [2024-11-19 09:08:59.927632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=955359 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 955359 /var/tmp/bdevperf.sock 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 955359 ']' 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:59.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:59.284 { 00:06:59.284 "params": { 00:06:59.284 "name": "Nvme$subsystem", 00:06:59.284 "trtype": "$TEST_TRANSPORT", 00:06:59.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:59.284 "adrfam": "ipv4", 00:06:59.284 "trsvcid": "$NVMF_PORT", 00:06:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:59.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:59.284 "hdgst": ${hdgst:-false}, 00:06:59.284 "ddgst": ${ddgst:-false} 00:06:59.284 }, 00:06:59.284 "method": "bdev_nvme_attach_controller" 00:06:59.284 } 00:06:59.284 EOF 00:06:59.284 )") 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:59.284 09:08:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:59.284 "params": { 00:06:59.284 "name": "Nvme0", 00:06:59.284 "trtype": "tcp", 00:06:59.284 "traddr": "10.0.0.2", 00:06:59.284 "adrfam": "ipv4", 00:06:59.284 "trsvcid": "4420", 00:06:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:59.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:59.284 "hdgst": false, 00:06:59.284 "ddgst": false 00:06:59.284 }, 00:06:59.284 "method": "bdev_nvme_attach_controller" 00:06:59.284 }' 00:06:59.284 [2024-11-19 09:09:00.025536] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
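The bdevperf launch above receives its bdev configuration as "--json /dev/fd/63", i.e. bash process substitution over gen_nvmf_target_json; the invocation is equivalent to this sketch (paths as in this run):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \   # expands to /dev/fd/63; yields the Nvme0 attach config printed above
        -q 64 -o 65536 -w verify -t 10       # queue depth 64, 64 KiB I/Os, verify workload, 10 seconds

The substituted JSON is exactly the resolved bdev_nvme_attach_controller object shown in the trace, pointing the initiator at 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode0.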
00:06:59.284 [2024-11-19 09:09:00.025584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955359 ] 00:06:59.284 [2024-11-19 09:09:00.102771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.284 [2024-11-19 09:09:00.144325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.543 Running I/O for 10 seconds... 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=93 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 93 -ge 100 ']' 00:06:59.543 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:59.802 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:59.802 
09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:59.802 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:59.802 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:59.802 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.802 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.802 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.063 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=677 00:07:00.063 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 677 -ge 100 ']' 00:07:00.063 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:00.063 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:00.063 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:00.063 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:00.063 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.063 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.063 [2024-11-19 09:09:00.875007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.064 [2024-11-19 09:09:00.875048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.064 [2024-11-19 09:09:00.875063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.064 [2024-11-19 09:09:00.875071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.064 [2024-11-19 09:09:00.875080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.064 [2024-11-19 09:09:00.875087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.064 [2024-11-19 09:09:00.875095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.064 [2024-11-19 09:09:00.875102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.064 [2024-11-19 09:09:00.875110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:00.064 [2024-11-19 09:09:00.875123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:07:00.064 [... 28 further WRITE/ABORTED - SQ DELETION pairs identical in form to the five above, covering cid:36-63, lba:102912-106368 (len:128 each) ...]
00:07:00.064 [2024-11-19 09:09:00.875541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:00.064 [2024-11-19 09:09:00.875548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:00.065 [... 30 further READ/ABORTED - SQ DELETION pairs identical in form to the one above, covering cid:1-30, lba:98432-102144 (len:128 each) ...]
00:07:00.065 [2024-11-19 09:09:00.876987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:00.065 task offset: 102272 on job bdev=Nvme0n1 fails
00:07:00.065 
00:07:00.065 Latency(us)
00:07:00.065 [2024-11-19T08:09:01.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:00.065 Job: Nvme0n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:07:00.065 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:00.065 Verification LBA range: start 0x0 length 0x400 00:07:00.065 Nvme0n1 : 0.41 1880.27 117.52 156.69 0.00 30575.25 1488.81 27696.08 00:07:00.065 [2024-11-19T08:09:01.124Z] =================================================================================================================== 00:07:00.065 [2024-11-19T08:09:01.124Z] Total : 1880.27 117.52 156.69 0.00 30575.25 1488.81 27696.08 00:07:00.065 [2024-11-19 09:09:00.879424] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.065 [2024-11-19 09:09:00.879449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250b500 (9): Bad file descriptor 00:07:00.065 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.065 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:00.066 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.066 [2024-11-19 09:09:00.880411] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:00.066 [2024-11-19 09:09:00.880483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:00.066 [2024-11-19 09:09:00.880504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:00.066 [2024-11-19 09:09:00.880519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:00.066 [2024-11-19 09:09:00.880527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:00.066 [2024-11-19 09:09:00.880533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:00.066 [2024-11-19 09:09:00.880540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x250b500 00:07:00.066 [2024-11-19 09:09:00.880558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250b500 (9): Bad file descriptor 00:07:00.066 [2024-11-19 09:09:00.880570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:00.066 [2024-11-19 09:09:00.880577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:00.066 [2024-11-19 09:09:00.880586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:00.066 [2024-11-19 09:09:00.880595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
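rpc_cmd above is the test harness wrapper around SPDK's scripts/rpc.py; issued standalone against the target's RPC socket, the host-management pair exercised here would look roughly like this (a sketch; the socket path is taken from the waitforlisten line above):

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # in-flight I/O is aborted (SQ DELETION), reconnects refused
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # re-allow the host

The "does not allow host" and "Connect command failed" errors above are the expected behavior in the window between the two calls, which is precisely what this test verifies.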
00:07:00.066 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:00.066 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.066 09:09:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 955359 00:07:01.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (955359) - No such process 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:01.004 { 00:07:01.004 "params": { 00:07:01.004 "name": "Nvme$subsystem", 00:07:01.004 "trtype": "$TEST_TRANSPORT", 00:07:01.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:01.004 "adrfam": "ipv4", 00:07:01.004 "trsvcid": "$NVMF_PORT", 00:07:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:01.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:01.004 "hdgst": ${hdgst:-false}, 00:07:01.004 "ddgst": ${ddgst:-false} 00:07:01.004 }, 00:07:01.004 "method": "bdev_nvme_attach_controller" 00:07:01.004 } 00:07:01.004 EOF 00:07:01.004 )") 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:01.004 09:09:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:01.004 "params": { 00:07:01.004 "name": "Nvme0", 00:07:01.004 "trtype": "tcp", 00:07:01.004 "traddr": "10.0.0.2", 00:07:01.004 "adrfam": "ipv4", 00:07:01.004 "trsvcid": "4420", 00:07:01.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:01.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:01.004 "hdgst": false, 00:07:01.004 "ddgst": false 00:07:01.004 }, 00:07:01.004 "method": "bdev_nvme_attach_controller" 00:07:01.004 }' 00:07:01.004 [2024-11-19 09:09:01.947213] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
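The read-count gate traced before the remove_host step (host_management.sh@52-64) condenses to roughly the following helper; a reconstruction from the trace with local names approximated, thresholds as observed:

    waitforio() {                                  # poll until the bdev has completed >= 100 reads
        local rpc_sock=$1 bdev=$2 ret=1 i count
        for ((i = 10; i != 0; i--)); do
            count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                        | jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0                              # enough I/O observed; safe to remove the host
                break
            fi
            sleep 0.25
        done
        return $ret
    }

In this run the first poll saw read_io_count=93 and the second 677, so the gate passed on the second iteration.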
00:07:01.004 [2024-11-19 09:09:01.947262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955738 ] 00:07:01.004 [2024-11-19 09:09:02.023661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.263 [2024-11-19 09:09:02.063967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.263 Running I/O for 1 seconds... 00:07:02.644 1957.00 IOPS, 122.31 MiB/s 00:07:02.644 Latency(us) 00:07:02.644 [2024-11-19T08:09:03.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.644 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:02.645 Verification LBA range: start 0x0 length 0x400 00:07:02.645 Nvme0n1 : 1.01 1997.58 124.85 0.00 0.00 31422.70 2450.48 27924.03 00:07:02.645 [2024-11-19T08:09:03.704Z] =================================================================================================================== 00:07:02.645 [2024-11-19T08:09:03.704Z] Total : 1997.58 124.85 0.00 0.00 31422.70 2450.48 27924.03 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:02.645 rmmod nvme_tcp 00:07:02.645 rmmod nvme_fabrics 00:07:02.645 rmmod nvme_keyring 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 955159 ']' 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 955159 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 955159 ']' 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 955159 00:07:02.645 09:09:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 955159 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 955159' 00:07:02.645 killing process with pid 955159 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 955159 00:07:02.645 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 955159 00:07:02.905 [2024-11-19 09:09:03.725262] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:02.905 09:09:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.816 09:09:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:04.816 09:09:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:04.816 00:07:04.816 real 0m12.481s 00:07:04.816 user 0m20.073s 00:07:04.816 sys 0m5.613s 00:07:04.816 09:09:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.816 09:09:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.816 ************************************ 00:07:04.816 END TEST nvmf_host_management 00:07:04.816 ************************************ 00:07:04.816 09:09:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
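The teardown traced above (nvmf/common.sh@297 and @791) undoes the setup by filtering the tagged firewall rule out of a ruleset round-trip and removing the namespace; a sketch of the visible effect:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rule
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns (suppressed here by xtrace_disable_per_cmd)
    ip -4 addr flush cvl_0_1                               # traced at nvmf/common.sh@303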
00:07:04.816 09:09:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:04.816 09:09:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.816 09:09:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.077 ************************************ 00:07:05.077 START TEST nvmf_lvol 00:07:05.077 ************************************ 00:07:05.077 09:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:05.077 * Looking for test storage... 00:07:05.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.077 09:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:05.077 09:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:05.077 09:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:05.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.077 --rc genhtml_branch_coverage=1 00:07:05.077 --rc genhtml_function_coverage=1 00:07:05.077 --rc genhtml_legend=1 00:07:05.077 --rc geninfo_all_blocks=1 00:07:05.077 --rc geninfo_unexecuted_blocks=1 00:07:05.077 00:07:05.077 ' 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:05.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.077 --rc genhtml_branch_coverage=1 00:07:05.077 --rc genhtml_function_coverage=1 00:07:05.077 --rc genhtml_legend=1 00:07:05.077 --rc geninfo_all_blocks=1 00:07:05.077 --rc geninfo_unexecuted_blocks=1 00:07:05.077 00:07:05.077 ' 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:05.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.077 --rc genhtml_branch_coverage=1 00:07:05.077 --rc genhtml_function_coverage=1 00:07:05.077 --rc genhtml_legend=1 00:07:05.077 --rc geninfo_all_blocks=1 00:07:05.077 --rc geninfo_unexecuted_blocks=1 00:07:05.077 00:07:05.077 ' 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:05.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.077 --rc genhtml_branch_coverage=1 00:07:05.077 --rc genhtml_function_coverage=1 00:07:05.077 --rc genhtml_legend=1 00:07:05.077 --rc geninfo_all_blocks=1 00:07:05.077 --rc geninfo_unexecuted_blocks=1 00:07:05.077 00:07:05.077 ' 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
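The version dance traced above is scripts/common.sh's cmp_versions splitting '1.15' and '2' on '.', '-' and ':' and comparing component-wise; since 1 < 2 on the first component, 'lt 1.15 2' returns 0 and the pre-2.x LCOV coverage flags get exported. A minimal sketch of that logic, assuming the simplified behavior visible in the trace (the real scripts/common.sh also handles padding and pre-release suffixes):

  # Sketch of lt/cmp_versions as traced above; simplified, and the
  # ${ver[v]:-0} padding of short versions is an assumption.
  cmp_versions() {
      local IFS=.-: op=$2 v ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>'* ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<'* ]]; return; }
      done
      [[ $op == *'='* ]]   # versions equal: only ==, <= and >= succeed
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo 'lcov is pre-2.x'   # prints: lcov is pre-2.x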
00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:05.077 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.078 09:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:11.654 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:11.654 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:11.654 09:09:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:11.654 Found net devices under 0000:86:00.0: cvl_0_0 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:11.654 Found net devices under 0000:86:00.1: cvl_0_1 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:11.654 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.655 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.655 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.655 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.655 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:11.655 09:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:11.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:07:11.655 00:07:11.655 --- 10.0.0.2 ping statistics --- 00:07:11.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.655 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:11.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:07:11.655 00:07:11.655 --- 10.0.0.1 ping statistics --- 00:07:11.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.655 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=959947 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 959947 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 959947 ']' 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:11.655 [2024-11-19 09:09:12.158284] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
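In short, nvmftestinit has now split the two ice ports into a back-to-back NVMe/TCP rig: cvl_0_0 (target side, 10.0.0.2) lives inside the cvl_0_0_ns_spdk network namespace, cvl_0_1 (initiator side, 10.0.0.1) stays in the root namespace, and both directions answer ping. Condensed from the exact commands in the trace above (a sketch, not the full nvmf/common.sh logic):

  ip netns add cvl_0_0_ns_spdk                          # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

This is why nvmf_tgt is launched with the 'ip netns exec cvl_0_0_ns_spdk' prefix above: the target runs entirely inside the target namespace while the perf initiator stays in the root one.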
00:07:11.655 [2024-11-19 09:09:12.158333] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.655 [2024-11-19 09:09:12.238832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.655 [2024-11-19 09:09:12.280920] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.655 [2024-11-19 09:09:12.280961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.655 [2024-11-19 09:09:12.280969] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.655 [2024-11-19 09:09:12.280976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.655 [2024-11-19 09:09:12.280981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.655 [2024-11-19 09:09:12.282383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.655 [2024-11-19 09:09:12.282497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.655 [2024-11-19 09:09:12.282497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:11.655 [2024-11-19 09:09:12.588039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.655 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:11.914 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:11.914 09:09:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:12.172 09:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:12.172 09:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:12.431 09:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:12.431 09:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6797159d-e004-44ef-b2bb-2176d99aa4c2 00:07:12.431 09:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6797159d-e004-44ef-b2bb-2176d99aa4c2 lvol 20 00:07:12.689 09:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=29cdbddf-34fd-4091-be16-1ee9e784bce2 00:07:12.689 09:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:12.948 09:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 29cdbddf-34fd-4091-be16-1ee9e784bce2 00:07:13.206 09:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.465 [2024-11-19 09:09:14.270361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.465 09:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.465 09:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=960402 00:07:13.465 09:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:13.465 09:09:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:14.842 09:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 29cdbddf-34fd-4091-be16-1ee9e784bce2 MY_SNAPSHOT 00:07:14.842 09:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7b507411-e8e5-459e-86cc-dd3d95223f21 00:07:14.842 09:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 29cdbddf-34fd-4091-be16-1ee9e784bce2 30 00:07:15.101 09:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7b507411-e8e5-459e-86cc-dd3d95223f21 MY_CLONE 00:07:15.359 09:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ed9f7084-efc1-4213-89d6-98d8ac451269 00:07:15.359 09:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ed9f7084-efc1-4213-89d6-98d8ac451269 00:07:15.927 09:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 960402 00:07:24.045 Initializing NVMe Controllers 00:07:24.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:24.045 Controller IO queue size 128, less than required. 00:07:24.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
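Stripped of the xtrace noise, the test body that just ran stacks an lvolstore on a RAID0 of two malloc bdevs, exports one lvol over NVMe/TCP, and then snapshots, grows, clones and inflates that lvol while spdk_nvme_perf hammers it with random writes. A condensed sketch of the same sequence (paths shortened; the $rpc/$lvs/$lvol/$snap/$clone variables are illustrative, while the RPC names and arguments are taken from the trace):

  rpc=spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                        # Malloc0: 64 MiB, 512 B blocks
  $rpc bdev_malloc_create 64 512                        # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvolstore on the raid
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB lvol
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &    # I/O in the background
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                      # grow 20 -> 30 MiB under I/O
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"                       # decouple clone from snapshot
  wait                                                  # let the 10 s perf run finish

The perf results that follow are the check that the lvol survived all of that reshaping while serving 128-deep 4 KiB random writes from two cores.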
00:07:24.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:24.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:24.046 Initialization complete. Launching workers. 00:07:24.046 ======================================================== 00:07:24.046 Latency(us) 00:07:24.046 Device Information : IOPS MiB/s Average min max 00:07:24.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11784.50 46.03 10866.55 1635.65 49196.27 00:07:24.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11861.30 46.33 10791.17 3410.87 57126.47 00:07:24.046 ======================================================== 00:07:24.046 Total : 23645.80 92.37 10828.73 1635.65 57126.47 00:07:24.046 00:07:24.046 09:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:24.305 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 29cdbddf-34fd-4091-be16-1ee9e784bce2 00:07:24.305 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6797159d-e004-44ef-b2bb-2176d99aa4c2 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:24.565 rmmod nvme_tcp 00:07:24.565 rmmod nvme_fabrics 00:07:24.565 rmmod nvme_keyring 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 959947 ']' 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 959947 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 959947 ']' 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 959947 00:07:24.565 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:24.824 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:24.824 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 959947 00:07:24.824 09:09:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:24.824 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 959947' 00:07:24.825 killing process with pid 959947 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 959947 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 959947 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.825 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.084 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.084 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.084 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.084 09:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.990 09:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.990 00:07:26.990 real 0m22.046s 00:07:26.990 user 1m3.292s 00:07:26.990 sys 0m7.865s 00:07:26.990 09:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.990 09:09:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:26.990 ************************************ 00:07:26.990 END TEST nvmf_lvol 00:07:26.990 ************************************ 00:07:26.990 09:09:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:26.990 09:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:26.990 09:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.990 09:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.990 ************************************ 00:07:26.990 START TEST nvmf_lvs_grow 00:07:26.990 ************************************ 00:07:26.990 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:27.253 * Looking for test storage... 
00:07:27.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:27.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.253 --rc genhtml_branch_coverage=1 00:07:27.253 --rc genhtml_function_coverage=1 00:07:27.253 --rc genhtml_legend=1 00:07:27.253 --rc geninfo_all_blocks=1 00:07:27.253 --rc geninfo_unexecuted_blocks=1 00:07:27.253 00:07:27.253 ' 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:27.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.253 --rc genhtml_branch_coverage=1 00:07:27.253 --rc genhtml_function_coverage=1 00:07:27.253 --rc genhtml_legend=1 00:07:27.253 --rc geninfo_all_blocks=1 00:07:27.253 --rc geninfo_unexecuted_blocks=1 00:07:27.253 00:07:27.253 ' 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:27.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.253 --rc genhtml_branch_coverage=1 00:07:27.253 --rc genhtml_function_coverage=1 00:07:27.253 --rc genhtml_legend=1 00:07:27.253 --rc geninfo_all_blocks=1 00:07:27.253 --rc geninfo_unexecuted_blocks=1 00:07:27.253 00:07:27.253 ' 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:27.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.253 --rc genhtml_branch_coverage=1 00:07:27.253 --rc genhtml_function_coverage=1 00:07:27.253 --rc genhtml_legend=1 00:07:27.253 --rc geninfo_all_blocks=1 00:07:27.253 --rc geninfo_unexecuted_blocks=1 00:07:27.253 00:07:27.253 ' 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:27.253 09:09:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.253 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:27.254 09:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:33.827 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:33.828 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:33.828 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.828 09:09:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:33.828 Found net devices under 0000:86:00.0: cvl_0_0 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:33.828 Found net devices under 0000:86:00.1: cvl_0_1 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.828 09:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:33.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:07:33.828 00:07:33.828 --- 10.0.0.2 ping statistics --- 00:07:33.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.828 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:07:33.828 00:07:33.828 --- 10.0.0.1 ping statistics --- 00:07:33.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.828 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=965788 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 965788 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 965788 ']' 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.828 [2024-11-19 09:09:34.252994] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
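The nvmf_tcp_init sequence traced above is reproducible by hand. A minimal sketch, assuming a root shell and two ice ports already renamed cvl_0_0 and cvl_0_1 (the interface names and 10.0.0.0/24 addressing are taken from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target, as in the log

Splitting the two ports of one physical NIC across a namespace boundary is what lets a single host act as both target and initiator over a real link, which is the point of NET_TYPE=phy.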
00:07:33.828 [2024-11-19 09:09:34.253041] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.828 [2024-11-19 09:09:34.334472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.828 [2024-11-19 09:09:34.375360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.828 [2024-11-19 09:09:34.375394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.828 [2024-11-19 09:09:34.375402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.828 [2024-11-19 09:09:34.375411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.828 [2024-11-19 09:09:34.375416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.828 [2024-11-19 09:09:34.375979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.828 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:33.829 [2024-11-19 09:09:34.682914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.829 ************************************ 00:07:33.829 START TEST lvs_grow_clean 00:07:33.829 ************************************ 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:33.829 09:09:34 
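nvmfappstart (sh@99) boils down to launching nvmf_tgt inside the target namespace, waiting for its RPC socket, and creating the transport over that socket. A sketch under the same assumptions, run from the SPDK repo root — the polling loop is a stand-in for the script's waitforlisten helper, not its actual implementation:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # -u 8192 sets the I/O unit size

rpc.py talks to /var/tmp/spdk.sock, which works across the namespace boundary because path-based UNIX sockets are filesystem objects rather than network-namespace ones. -m 0x1 pins the target to core 0 (matching the single "Reactor started on core 0" notice above), and -e 0xFFFF enables all tracepoint groups, which is what produces the "Tracepoint Group Mask 0xFFFF" notices.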
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:33.829 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:34.087 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:34.087 09:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:34.087 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:34.087 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:34.087 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:34.346 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:34.346 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:34.346 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 lvol 150 00:07:34.604 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=802e1c27-a1d3-47ff-9707-274f6b5bd503 00:07:34.604 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:34.604 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:34.864 [2024-11-19 09:09:35.712920] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:34.864 [2024-11-19 09:09:35.712976] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:34.864 true 00:07:34.864 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
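Everything lvs_grow needs is built from a plain file: truncate a 200 MiB file, wrap it in an AIO bdev, and carve that into an lvstore with 4 MiB clusters. 200/4 gives 50 clusters, with one cluster's worth going to lvstore metadata, hence the expected total_data_clusters of 49. A 150 MiB lvol is then created and the backing file grown to 400 MiB; bdev_aio_rescan picks up the new size (51200 -> 102400 blocks in the notice above) but, as the sh@38 re-check that follows confirms, the lvstore still reports 49 clusters until it is explicitly grown. A sketch of the same RPC sequence, with AIO_FILE as a placeholder path:

    AIO_FILE=test/nvmf/target/aio_bdev
    rm -f "$AIO_FILE" && truncate -s 200M "$AIO_FILE"
    ./scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
    lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 49
    lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # size argument in MiB in this rpc.py revision
    truncate -s 400M "$AIO_FILE"
    ./scripts/rpc.py bdev_aio_rescan aio_bdev

Both creation RPCs print the new object's UUID on stdout, which is why the script can capture them straight into $lvs and $lvol.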
2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:34.864 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:34.864 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:34.864 09:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:35.123 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 802e1c27-a1d3-47ff-9707-274f6b5bd503 00:07:35.381 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:35.640 [2024-11-19 09:09:36.467173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.640 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.640 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=966283 00:07:35.640 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:35.640 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.641 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 966283 /var/tmp/bdevperf.sock 00:07:35.641 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 966283 ']' 00:07:35.641 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:35.641 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.641 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:35.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:35.641 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.641 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:35.899 [2024-11-19 09:09:36.722365] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
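With the lvol in place, exporting it over NVMe/TCP is three RPCs: create the subsystem, attach the lvol as a namespace by UUID, and listen on the namespaced port. Sketch, continuing with the variables above:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above is the target-side confirmation; bdevperf then connects from the root namespace as the initiator.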
00:07:35.899 [2024-11-19 09:09:36.722412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid966283 ] 00:07:35.899 [2024-11-19 09:09:36.799198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.899 [2024-11-19 09:09:36.839975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.899 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:35.899 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:35.899 09:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:36.465 Nvme0n1 00:07:36.465 09:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:36.465 [ 00:07:36.465 { 00:07:36.465 "name": "Nvme0n1", 00:07:36.465 "aliases": [ 00:07:36.465 "802e1c27-a1d3-47ff-9707-274f6b5bd503" 00:07:36.465 ], 00:07:36.465 "product_name": "NVMe disk", 00:07:36.465 "block_size": 4096, 00:07:36.465 "num_blocks": 38912, 00:07:36.465 "uuid": "802e1c27-a1d3-47ff-9707-274f6b5bd503", 00:07:36.465 "numa_id": 1, 00:07:36.465 "assigned_rate_limits": { 00:07:36.465 "rw_ios_per_sec": 0, 00:07:36.466 "rw_mbytes_per_sec": 0, 00:07:36.466 "r_mbytes_per_sec": 0, 00:07:36.466 "w_mbytes_per_sec": 0 00:07:36.466 }, 00:07:36.466 "claimed": false, 00:07:36.466 "zoned": false, 00:07:36.466 "supported_io_types": { 00:07:36.466 "read": true, 00:07:36.466 "write": true, 00:07:36.466 "unmap": true, 00:07:36.466 "flush": true, 00:07:36.466 "reset": true, 00:07:36.466 "nvme_admin": true, 00:07:36.466 "nvme_io": true, 00:07:36.466 "nvme_io_md": false, 00:07:36.466 "write_zeroes": true, 00:07:36.466 "zcopy": false, 00:07:36.466 "get_zone_info": false, 00:07:36.466 "zone_management": false, 00:07:36.466 "zone_append": false, 00:07:36.466 "compare": true, 00:07:36.466 "compare_and_write": true, 00:07:36.466 "abort": true, 00:07:36.466 "seek_hole": false, 00:07:36.466 "seek_data": false, 00:07:36.466 "copy": true, 00:07:36.466 "nvme_iov_md": false 00:07:36.466 }, 00:07:36.466 "memory_domains": [ 00:07:36.466 { 00:07:36.466 "dma_device_id": "system", 00:07:36.466 "dma_device_type": 1 00:07:36.466 } 00:07:36.466 ], 00:07:36.466 "driver_specific": { 00:07:36.466 "nvme": [ 00:07:36.466 { 00:07:36.466 "trid": { 00:07:36.466 "trtype": "TCP", 00:07:36.466 "adrfam": "IPv4", 00:07:36.466 "traddr": "10.0.0.2", 00:07:36.466 "trsvcid": "4420", 00:07:36.466 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:36.466 }, 00:07:36.466 "ctrlr_data": { 00:07:36.466 "cntlid": 1, 00:07:36.466 "vendor_id": "0x8086", 00:07:36.466 "model_number": "SPDK bdev Controller", 00:07:36.466 "serial_number": "SPDK0", 00:07:36.466 "firmware_revision": "25.01", 00:07:36.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:36.466 "oacs": { 00:07:36.466 "security": 0, 00:07:36.466 "format": 0, 00:07:36.466 "firmware": 0, 00:07:36.466 "ns_manage": 0 00:07:36.466 }, 00:07:36.466 "multi_ctrlr": true, 00:07:36.466 
"ana_reporting": false 00:07:36.466 }, 00:07:36.466 "vs": { 00:07:36.466 "nvme_version": "1.3" 00:07:36.466 }, 00:07:36.466 "ns_data": { 00:07:36.466 "id": 1, 00:07:36.466 "can_share": true 00:07:36.466 } 00:07:36.466 } 00:07:36.466 ], 00:07:36.466 "mp_policy": "active_passive" 00:07:36.466 } 00:07:36.466 } 00:07:36.466 ] 00:07:36.466 09:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=966510 00:07:36.466 09:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:36.466 09:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:36.725 Running I/O for 10 seconds... 00:07:37.660 Latency(us) 00:07:37.660 [2024-11-19T08:09:38.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.660 Nvme0n1 : 1.00 22737.00 88.82 0.00 0.00 0.00 0.00 0.00 00:07:37.660 [2024-11-19T08:09:38.719Z] =================================================================================================================== 00:07:37.660 [2024-11-19T08:09:38.719Z] Total : 22737.00 88.82 0.00 0.00 0.00 0.00 0.00 00:07:37.660 00:07:38.596 09:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:38.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.596 Nvme0n1 : 2.00 22833.00 89.19 0.00 0.00 0.00 0.00 0.00 00:07:38.596 [2024-11-19T08:09:39.655Z] =================================================================================================================== 00:07:38.596 [2024-11-19T08:09:39.655Z] Total : 22833.00 89.19 0.00 0.00 0.00 0.00 0.00 00:07:38.596 00:07:38.854 true 00:07:38.854 09:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:38.854 09:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:39.113 09:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:39.113 09:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:39.113 09:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 966510 00:07:39.680 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.680 Nvme0n1 : 3.00 22695.67 88.65 0.00 0.00 0.00 0.00 0.00 00:07:39.680 [2024-11-19T08:09:40.739Z] =================================================================================================================== 00:07:39.680 [2024-11-19T08:09:40.739Z] Total : 22695.67 88.65 0.00 0.00 0.00 0.00 0.00 00:07:39.680 00:07:40.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.615 Nvme0n1 : 4.00 22780.25 88.99 0.00 0.00 0.00 0.00 0.00 00:07:40.615 [2024-11-19T08:09:41.674Z] 
=================================================================================================================== 00:07:40.615 [2024-11-19T08:09:41.674Z] Total : 22780.25 88.99 0.00 0.00 0.00 0.00 0.00 00:07:40.615 00:07:41.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.991 Nvme0n1 : 5.00 22860.60 89.30 0.00 0.00 0.00 0.00 0.00 00:07:41.991 [2024-11-19T08:09:43.050Z] =================================================================================================================== 00:07:41.991 [2024-11-19T08:09:43.050Z] Total : 22860.60 89.30 0.00 0.00 0.00 0.00 0.00 00:07:41.991 00:07:42.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.580 Nvme0n1 : 6.00 22912.50 89.50 0.00 0.00 0.00 0.00 0.00 00:07:42.580 [2024-11-19T08:09:43.639Z] =================================================================================================================== 00:07:42.580 [2024-11-19T08:09:43.639Z] Total : 22912.50 89.50 0.00 0.00 0.00 0.00 0.00 00:07:42.580 00:07:43.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.954 Nvme0n1 : 7.00 22947.00 89.64 0.00 0.00 0.00 0.00 0.00 00:07:43.954 [2024-11-19T08:09:45.013Z] =================================================================================================================== 00:07:43.954 [2024-11-19T08:09:45.013Z] Total : 22947.00 89.64 0.00 0.00 0.00 0.00 0.00 00:07:43.954 00:07:44.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.890 Nvme0n1 : 8.00 22976.38 89.75 0.00 0.00 0.00 0.00 0.00 00:07:44.890 [2024-11-19T08:09:45.949Z] =================================================================================================================== 00:07:44.890 [2024-11-19T08:09:45.949Z] Total : 22976.38 89.75 0.00 0.00 0.00 0.00 0.00 00:07:44.890 00:07:45.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.825 Nvme0n1 : 9.00 23010.67 89.89 0.00 0.00 0.00 0.00 0.00 00:07:45.825 [2024-11-19T08:09:46.884Z] =================================================================================================================== 00:07:45.825 [2024-11-19T08:09:46.884Z] Total : 23010.67 89.89 0.00 0.00 0.00 0.00 0.00 00:07:45.825 00:07:46.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.761 Nvme0n1 : 10.00 23028.90 89.96 0.00 0.00 0.00 0.00 0.00 00:07:46.761 [2024-11-19T08:09:47.820Z] =================================================================================================================== 00:07:46.761 [2024-11-19T08:09:47.820Z] Total : 23028.90 89.96 0.00 0.00 0.00 0.00 0.00 00:07:46.761 00:07:46.761 00:07:46.761 Latency(us) 00:07:46.761 [2024-11-19T08:09:47.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.761 Nvme0n1 : 10.00 23031.84 89.97 0.00 0.00 5554.63 1923.34 12195.39 00:07:46.761 [2024-11-19T08:09:47.820Z] =================================================================================================================== 00:07:46.761 [2024-11-19T08:09:47.820Z] Total : 23031.84 89.97 0.00 0.00 5554.63 1923.34 12195.39 00:07:46.761 { 00:07:46.761 "results": [ 00:07:46.761 { 00:07:46.761 "job": "Nvme0n1", 00:07:46.761 "core_mask": "0x2", 00:07:46.761 "workload": "randwrite", 00:07:46.761 "status": "finished", 00:07:46.761 "queue_depth": 128, 00:07:46.761 "io_size": 4096, 00:07:46.761 
"runtime": 10.004283, 00:07:46.761 "iops": 23031.835464870397, 00:07:46.761 "mibps": 89.96810728464999, 00:07:46.761 "io_failed": 0, 00:07:46.761 "io_timeout": 0, 00:07:46.761 "avg_latency_us": 5554.632255417447, 00:07:46.761 "min_latency_us": 1923.3391304347826, 00:07:46.761 "max_latency_us": 12195.394782608695 00:07:46.761 } 00:07:46.761 ], 00:07:46.761 "core_count": 1 00:07:46.761 } 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 966283 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 966283 ']' 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 966283 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 966283 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 966283' 00:07:46.761 killing process with pid 966283 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 966283 00:07:46.761 Received shutdown signal, test time was about 10.000000 seconds 00:07:46.761 00:07:46.761 Latency(us) 00:07:46.761 [2024-11-19T08:09:47.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.761 [2024-11-19T08:09:47.820Z] =================================================================================================================== 00:07:46.761 [2024-11-19T08:09:47.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:46.761 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 966283 00:07:47.019 09:09:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.020 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:47.291 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:47.291 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:47.567 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:47.567 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:47.567 09:09:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:47.838 [2024-11-19 09:09:48.671646] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:47.838 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:48.122 request: 00:07:48.122 { 00:07:48.122 "uuid": "2c7c1f41-26f9-41fe-9348-fb907df2e729", 00:07:48.122 "method": "bdev_lvol_get_lvstores", 00:07:48.122 "req_id": 1 00:07:48.122 } 00:07:48.122 Got JSON-RPC error response 00:07:48.122 response: 00:07:48.122 { 00:07:48.122 "code": -19, 00:07:48.122 "message": "No such device" 00:07:48.122 } 00:07:48.122 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:48.122 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.122 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.122 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.122 09:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:48.122 aio_bdev 00:07:48.122 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 802e1c27-a1d3-47ff-9707-274f6b5bd503 00:07:48.122 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=802e1c27-a1d3-47ff-9707-274f6b5bd503 00:07:48.122 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:48.122 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:48.122 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:48.122 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:48.122 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:48.396 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 802e1c27-a1d3-47ff-9707-274f6b5bd503 -t 2000 00:07:48.694 [ 00:07:48.694 { 00:07:48.694 "name": "802e1c27-a1d3-47ff-9707-274f6b5bd503", 00:07:48.694 "aliases": [ 00:07:48.694 "lvs/lvol" 00:07:48.694 ], 00:07:48.694 "product_name": "Logical Volume", 00:07:48.694 "block_size": 4096, 00:07:48.694 "num_blocks": 38912, 00:07:48.694 "uuid": "802e1c27-a1d3-47ff-9707-274f6b5bd503", 00:07:48.694 "assigned_rate_limits": { 00:07:48.694 "rw_ios_per_sec": 0, 00:07:48.694 "rw_mbytes_per_sec": 0, 00:07:48.694 "r_mbytes_per_sec": 0, 00:07:48.694 "w_mbytes_per_sec": 0 00:07:48.694 }, 00:07:48.694 "claimed": false, 00:07:48.694 "zoned": false, 00:07:48.694 "supported_io_types": { 00:07:48.694 "read": true, 00:07:48.694 "write": true, 00:07:48.694 "unmap": true, 00:07:48.694 "flush": false, 00:07:48.694 "reset": true, 00:07:48.694 "nvme_admin": false, 00:07:48.694 "nvme_io": false, 00:07:48.694 "nvme_io_md": false, 00:07:48.694 "write_zeroes": true, 00:07:48.694 "zcopy": false, 00:07:48.694 "get_zone_info": false, 00:07:48.694 "zone_management": false, 00:07:48.694 "zone_append": false, 00:07:48.694 "compare": false, 00:07:48.694 "compare_and_write": false, 00:07:48.694 "abort": false, 00:07:48.694 "seek_hole": true, 00:07:48.694 "seek_data": true, 00:07:48.694 "copy": false, 00:07:48.694 "nvme_iov_md": false 00:07:48.694 }, 00:07:48.694 "driver_specific": { 00:07:48.694 "lvol": { 00:07:48.694 "lvol_store_uuid": "2c7c1f41-26f9-41fe-9348-fb907df2e729", 00:07:48.694 "base_bdev": "aio_bdev", 00:07:48.694 "thin_provision": false, 00:07:48.694 "num_allocated_clusters": 38, 00:07:48.694 "snapshot": false, 00:07:48.694 "clone": false, 00:07:48.694 "esnap_clone": false 00:07:48.694 } 00:07:48.694 } 00:07:48.694 } 00:07:48.694 ] 00:07:48.694 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:48.694 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:48.694 
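Lines sh@84-@89 are the persistence half of the test: hot-removing the AIO bdev closes the lvstore (the NOT wrapper asserts that bdev_lvol_get_lvstores now fails with -19, "No such device"), and re-creating it lets bdev examine rebuild the lvstore and lvol from on-disk metadata. The post-reload numbers check out: the 150 MiB lvol holds 38 clusters (ceil(150/4), matching "num_allocated_clusters": 38 in the JSON above), so 99 - 38 = 61 free clusters, exactly the sh@70/@88 value. A sketch of the reload check — the `&& exit 1` is a crude stand-in for the script's NOT helper:

    ./scripts/rpc.py bdev_aio_delete aio_bdev                      # lvstore goes away with its base bdev
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" && exit 1    # must fail: -19, No such device
    ./scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096     # metadata re-examined from disk
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000
    ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # expect 61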
09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:48.694 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:48.694 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:48.694 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:49.014 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:49.014 09:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 802e1c27-a1d3-47ff-9707-274f6b5bd503 00:07:49.014 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c7c1f41-26f9-41fe-9348-fb907df2e729 00:07:49.340 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:49.600 00:07:49.600 real 0m15.754s 00:07:49.600 user 0m15.277s 00:07:49.600 sys 0m1.534s 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:49.600 ************************************ 00:07:49.600 END TEST lvs_grow_clean 00:07:49.600 ************************************ 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.600 ************************************ 00:07:49.600 START TEST lvs_grow_dirty 00:07:49.600 ************************************ 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
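The clean-variant teardown just traced (sh@92-@95) unwinds in reverse order — lvol, lvstore, AIO bdev, backing file — before run_test re-enters lvs_grow with the dirty argument:

    ./scripts/rpc.py bdev_lvol_delete "$lvol"
    ./scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    ./scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f "$AIO_FILE"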
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:49.600 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.858 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:49.858 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:50.117 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3621326d-4326-4eb0-b124-94019189aff0 00:07:50.117 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:07:50.117 09:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:50.376 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:50.376 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:50.376 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3621326d-4326-4eb0-b124-94019189aff0 lvol 150 00:07:50.376 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=da4f3849-51c0-4aca-be17-a4c4d3b6305a 00:07:50.376 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:50.376 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:50.635 [2024-11-19 09:09:51.544962] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:50.635 [2024-11-19 09:09:51.545013] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:50.635 true 00:07:50.635 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:07:50.636 09:09:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:50.895 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:50.895 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:50.895 09:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 da4f3849-51c0-4aca-be17-a4c4d3b6305a 00:07:51.154 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:51.412 [2024-11-19 09:09:52.311208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.412 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=969029 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 969029 /var/tmp/bdevperf.sock 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 969029 ']' 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.671 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:51.671 [2024-11-19 09:09:52.564856] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
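The I/O harness is the same in both variants: bdevperf starts paused (-z), a controller is attached to the exported subsystem over bdevperf's own RPC socket, and perform_tests kicks off the 10 s randwrite run whose per-second table follows. Sketch with the exact parameters from this run:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

-m 0x2 keeps bdevperf on core 1 while the target owns core 0 (hence the "Reactor started on core 1" notice), and -S 1 is what produces the one-line-per-second latency table.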
00:07:51.671 [2024-11-19 09:09:52.564904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969029 ] 00:07:51.671 [2024-11-19 09:09:52.639276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.671 [2024-11-19 09:09:52.682087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.930 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:51.930 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:51.930 09:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:52.188 Nvme0n1 00:07:52.188 09:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:52.447 [ 00:07:52.447 { 00:07:52.447 "name": "Nvme0n1", 00:07:52.447 "aliases": [ 00:07:52.447 "da4f3849-51c0-4aca-be17-a4c4d3b6305a" 00:07:52.447 ], 00:07:52.447 "product_name": "NVMe disk", 00:07:52.447 "block_size": 4096, 00:07:52.447 "num_blocks": 38912, 00:07:52.447 "uuid": "da4f3849-51c0-4aca-be17-a4c4d3b6305a", 00:07:52.447 "numa_id": 1, 00:07:52.447 "assigned_rate_limits": { 00:07:52.447 "rw_ios_per_sec": 0, 00:07:52.447 "rw_mbytes_per_sec": 0, 00:07:52.447 "r_mbytes_per_sec": 0, 00:07:52.447 "w_mbytes_per_sec": 0 00:07:52.447 }, 00:07:52.447 "claimed": false, 00:07:52.447 "zoned": false, 00:07:52.447 "supported_io_types": { 00:07:52.447 "read": true, 00:07:52.447 "write": true, 00:07:52.447 "unmap": true, 00:07:52.447 "flush": true, 00:07:52.447 "reset": true, 00:07:52.447 "nvme_admin": true, 00:07:52.447 "nvme_io": true, 00:07:52.447 "nvme_io_md": false, 00:07:52.447 "write_zeroes": true, 00:07:52.447 "zcopy": false, 00:07:52.447 "get_zone_info": false, 00:07:52.447 "zone_management": false, 00:07:52.447 "zone_append": false, 00:07:52.447 "compare": true, 00:07:52.447 "compare_and_write": true, 00:07:52.447 "abort": true, 00:07:52.447 "seek_hole": false, 00:07:52.447 "seek_data": false, 00:07:52.447 "copy": true, 00:07:52.447 "nvme_iov_md": false 00:07:52.447 }, 00:07:52.447 "memory_domains": [ 00:07:52.447 { 00:07:52.447 "dma_device_id": "system", 00:07:52.447 "dma_device_type": 1 00:07:52.447 } 00:07:52.447 ], 00:07:52.447 "driver_specific": { 00:07:52.447 "nvme": [ 00:07:52.447 { 00:07:52.447 "trid": { 00:07:52.447 "trtype": "TCP", 00:07:52.447 "adrfam": "IPv4", 00:07:52.447 "traddr": "10.0.0.2", 00:07:52.447 "trsvcid": "4420", 00:07:52.447 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:52.447 }, 00:07:52.447 "ctrlr_data": { 00:07:52.447 "cntlid": 1, 00:07:52.447 "vendor_id": "0x8086", 00:07:52.447 "model_number": "SPDK bdev Controller", 00:07:52.447 "serial_number": "SPDK0", 00:07:52.447 "firmware_revision": "25.01", 00:07:52.447 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.447 "oacs": { 00:07:52.447 "security": 0, 00:07:52.447 "format": 0, 00:07:52.447 "firmware": 0, 00:07:52.447 "ns_manage": 0 00:07:52.447 }, 00:07:52.447 "multi_ctrlr": true, 00:07:52.447 
"ana_reporting": false 00:07:52.447 }, 00:07:52.447 "vs": { 00:07:52.447 "nvme_version": "1.3" 00:07:52.447 }, 00:07:52.447 "ns_data": { 00:07:52.447 "id": 1, 00:07:52.447 "can_share": true 00:07:52.447 } 00:07:52.447 } 00:07:52.447 ], 00:07:52.447 "mp_policy": "active_passive" 00:07:52.447 } 00:07:52.447 } 00:07:52.447 ] 00:07:52.447 09:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=969128 00:07:52.447 09:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:52.447 09:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:52.447 Running I/O for 10 seconds... 00:07:53.822 Latency(us) 00:07:53.822 [2024-11-19T08:09:54.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.822 Nvme0n1 : 1.00 22613.00 88.33 0.00 0.00 0.00 0.00 0.00 00:07:53.822 [2024-11-19T08:09:54.881Z] =================================================================================================================== 00:07:53.822 [2024-11-19T08:09:54.881Z] Total : 22613.00 88.33 0.00 0.00 0.00 0.00 0.00 00:07:53.822 00:07:54.389 09:09:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3621326d-4326-4eb0-b124-94019189aff0 00:07:54.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.389 Nvme0n1 : 2.00 22752.00 88.88 0.00 0.00 0.00 0.00 0.00 00:07:54.389 [2024-11-19T08:09:55.448Z] =================================================================================================================== 00:07:54.389 [2024-11-19T08:09:55.448Z] Total : 22752.00 88.88 0.00 0.00 0.00 0.00 0.00 00:07:54.389 00:07:54.648 true 00:07:54.648 09:09:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:07:54.648 09:09:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:54.906 09:09:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:54.906 09:09:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:54.906 09:09:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 969128 00:07:55.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.474 Nvme0n1 : 3.00 22815.67 89.12 0.00 0.00 0.00 0.00 0.00 00:07:55.474 [2024-11-19T08:09:56.533Z] =================================================================================================================== 00:07:55.474 [2024-11-19T08:09:56.533Z] Total : 22815.67 89.12 0.00 0.00 0.00 0.00 0.00 00:07:55.474 00:07:56.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.413 Nvme0n1 : 4.00 22888.50 89.41 0.00 0.00 0.00 0.00 0.00 00:07:56.413 [2024-11-19T08:09:57.472Z] 
=================================================================================================================== 00:07:56.413 [2024-11-19T08:09:57.472Z] Total : 22888.50 89.41 0.00 0.00 0.00 0.00 0.00 00:07:56.413 00:07:57.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.792 Nvme0n1 : 5.00 22947.60 89.64 0.00 0.00 0.00 0.00 0.00 00:07:57.792 [2024-11-19T08:09:58.851Z] =================================================================================================================== 00:07:57.792 [2024-11-19T08:09:58.851Z] Total : 22947.60 89.64 0.00 0.00 0.00 0.00 0.00 00:07:57.792 00:07:58.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.724 Nvme0n1 : 6.00 22988.33 89.80 0.00 0.00 0.00 0.00 0.00 00:07:58.724 [2024-11-19T08:09:59.783Z] =================================================================================================================== 00:07:58.724 [2024-11-19T08:09:59.783Z] Total : 22988.33 89.80 0.00 0.00 0.00 0.00 0.00 00:07:58.724 00:07:59.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.661 Nvme0n1 : 7.00 22970.00 89.73 0.00 0.00 0.00 0.00 0.00 00:07:59.661 [2024-11-19T08:10:00.720Z] =================================================================================================================== 00:07:59.661 [2024-11-19T08:10:00.720Z] Total : 22970.00 89.73 0.00 0.00 0.00 0.00 0.00 00:07:59.661 00:08:00.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.597 Nvme0n1 : 8.00 22988.00 89.80 0.00 0.00 0.00 0.00 0.00 00:08:00.597 [2024-11-19T08:10:01.656Z] =================================================================================================================== 00:08:00.597 [2024-11-19T08:10:01.656Z] Total : 22988.00 89.80 0.00 0.00 0.00 0.00 0.00 00:08:00.597 00:08:01.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.534 Nvme0n1 : 9.00 23011.44 89.89 0.00 0.00 0.00 0.00 0.00 00:08:01.534 [2024-11-19T08:10:02.593Z] =================================================================================================================== 00:08:01.534 [2024-11-19T08:10:02.593Z] Total : 23011.44 89.89 0.00 0.00 0.00 0.00 0.00 00:08:01.534 00:08:02.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.471 Nvme0n1 : 10.00 23021.40 89.93 0.00 0.00 0.00 0.00 0.00 00:08:02.471 [2024-11-19T08:10:03.530Z] =================================================================================================================== 00:08:02.471 [2024-11-19T08:10:03.530Z] Total : 23021.40 89.93 0.00 0.00 0.00 0.00 0.00 00:08:02.471 00:08:02.471 00:08:02.471 Latency(us) 00:08:02.471 [2024-11-19T08:10:03.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.471 Nvme0n1 : 10.00 23026.04 89.95 0.00 0.00 5556.04 3191.32 14474.91 00:08:02.471 [2024-11-19T08:10:03.530Z] =================================================================================================================== 00:08:02.471 [2024-11-19T08:10:03.530Z] Total : 23026.04 89.95 0.00 0.00 5556.04 3191.32 14474.91 00:08:02.471 { 00:08:02.471 "results": [ 00:08:02.471 { 00:08:02.471 "job": "Nvme0n1", 00:08:02.471 "core_mask": "0x2", 00:08:02.471 "workload": "randwrite", 00:08:02.471 "status": "finished", 00:08:02.471 "queue_depth": 128, 00:08:02.471 "io_size": 4096, 00:08:02.471 
"runtime": 10.003545, 00:08:02.471 "iops": 23026.03726978786, 00:08:02.471 "mibps": 89.94545808510883, 00:08:02.471 "io_failed": 0, 00:08:02.471 "io_timeout": 0, 00:08:02.471 "avg_latency_us": 5556.03703196721, 00:08:02.471 "min_latency_us": 3191.318260869565, 00:08:02.471 "max_latency_us": 14474.907826086957 00:08:02.471 } 00:08:02.471 ], 00:08:02.471 "core_count": 1 00:08:02.471 } 00:08:02.471 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 969029 00:08:02.471 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 969029 ']' 00:08:02.471 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 969029 00:08:02.471 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:02.471 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:02.471 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 969029 00:08:02.730 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:02.730 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:02.730 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 969029' 00:08:02.730 killing process with pid 969029 00:08:02.730 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 969029 00:08:02.730 Received shutdown signal, test time was about 10.000000 seconds 00:08:02.730 00:08:02.730 Latency(us) 00:08:02.730 [2024-11-19T08:10:03.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.730 [2024-11-19T08:10:03.789Z] =================================================================================================================== 00:08:02.730 [2024-11-19T08:10:03.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:02.730 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 969029 00:08:02.730 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.990 09:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:03.249 09:10:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 965788 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 965788 00:08:03.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 965788 Killed "${NVMF_APP[@]}" "$@" 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:03.249 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.508 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=970982 00:08:03.508 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 970982 00:08:03.508 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:03.508 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 970982 ']' 00:08:03.508 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.508 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.508 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.508 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.508 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.508 [2024-11-19 09:10:04.354085] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:08:03.508 [2024-11-19 09:10:04.354135] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.508 [2024-11-19 09:10:04.430579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.508 [2024-11-19 09:10:04.469309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.508 [2024-11-19 09:10:04.469345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.508 [2024-11-19 09:10:04.469352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.508 [2024-11-19 09:10:04.469358] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:03.508 [2024-11-19 09:10:04.469364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.508 [2024-11-19 09:10:04.469961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.766 [2024-11-19 09:10:04.787695] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:03.766 [2024-11-19 09:10:04.787792] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:03.766 [2024-11-19 09:10:04.787818] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev da4f3849-51c0-4aca-be17-a4c4d3b6305a 00:08:03.766 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=da4f3849-51c0-4aca-be17-a4c4d3b6305a 00:08:03.767 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:03.767 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:03.767 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:03.767 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:03.767 09:10:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:04.025 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b da4f3849-51c0-4aca-be17-a4c4d3b6305a -t 2000 00:08:04.283 [ 00:08:04.283 { 00:08:04.283 "name": "da4f3849-51c0-4aca-be17-a4c4d3b6305a", 00:08:04.283 "aliases": [ 00:08:04.283 "lvs/lvol" 00:08:04.283 ], 00:08:04.283 "product_name": "Logical Volume", 00:08:04.283 "block_size": 4096, 00:08:04.283 "num_blocks": 38912, 00:08:04.283 "uuid": "da4f3849-51c0-4aca-be17-a4c4d3b6305a", 00:08:04.283 "assigned_rate_limits": { 00:08:04.283 "rw_ios_per_sec": 0, 00:08:04.283 "rw_mbytes_per_sec": 0, 
00:08:04.283 "r_mbytes_per_sec": 0, 00:08:04.283 "w_mbytes_per_sec": 0 00:08:04.283 }, 00:08:04.283 "claimed": false, 00:08:04.283 "zoned": false, 00:08:04.283 "supported_io_types": { 00:08:04.283 "read": true, 00:08:04.283 "write": true, 00:08:04.283 "unmap": true, 00:08:04.283 "flush": false, 00:08:04.283 "reset": true, 00:08:04.283 "nvme_admin": false, 00:08:04.283 "nvme_io": false, 00:08:04.283 "nvme_io_md": false, 00:08:04.283 "write_zeroes": true, 00:08:04.283 "zcopy": false, 00:08:04.283 "get_zone_info": false, 00:08:04.283 "zone_management": false, 00:08:04.283 "zone_append": false, 00:08:04.283 "compare": false, 00:08:04.283 "compare_and_write": false, 00:08:04.283 "abort": false, 00:08:04.283 "seek_hole": true, 00:08:04.283 "seek_data": true, 00:08:04.283 "copy": false, 00:08:04.284 "nvme_iov_md": false 00:08:04.284 }, 00:08:04.284 "driver_specific": { 00:08:04.284 "lvol": { 00:08:04.284 "lvol_store_uuid": "3621326d-4326-4eb0-b124-94019189aff0", 00:08:04.284 "base_bdev": "aio_bdev", 00:08:04.284 "thin_provision": false, 00:08:04.284 "num_allocated_clusters": 38, 00:08:04.284 "snapshot": false, 00:08:04.284 "clone": false, 00:08:04.284 "esnap_clone": false 00:08:04.284 } 00:08:04.284 } 00:08:04.284 } 00:08:04.284 ] 00:08:04.284 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:04.284 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:08:04.284 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:04.542 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:04.542 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:08:04.542 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:04.542 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:04.542 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:04.802 [2024-11-19 09:10:05.756592] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:04.802 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:08:05.060 request: 00:08:05.060 { 00:08:05.060 "uuid": "3621326d-4326-4eb0-b124-94019189aff0", 00:08:05.060 "method": "bdev_lvol_get_lvstores", 00:08:05.060 "req_id": 1 00:08:05.060 } 00:08:05.060 Got JSON-RPC error response 00:08:05.060 response: 00:08:05.060 { 00:08:05.060 "code": -19, 00:08:05.060 "message": "No such device" 00:08:05.060 } 00:08:05.060 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:05.060 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:05.060 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:05.060 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:05.060 09:10:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:05.317 aio_bdev 00:08:05.317 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev da4f3849-51c0-4aca-be17-a4c4d3b6305a 00:08:05.317 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=da4f3849-51c0-4aca-be17-a4c4d3b6305a 00:08:05.317 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:05.317 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:05.317 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:05.317 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:05.317 09:10:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:05.317 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b da4f3849-51c0-4aca-be17-a4c4d3b6305a -t 2000 00:08:05.576 [ 00:08:05.576 { 00:08:05.576 "name": "da4f3849-51c0-4aca-be17-a4c4d3b6305a", 00:08:05.576 "aliases": [ 00:08:05.576 "lvs/lvol" 00:08:05.576 ], 00:08:05.576 "product_name": "Logical Volume", 00:08:05.576 "block_size": 4096, 00:08:05.576 "num_blocks": 38912, 00:08:05.576 "uuid": "da4f3849-51c0-4aca-be17-a4c4d3b6305a", 00:08:05.576 "assigned_rate_limits": { 00:08:05.576 "rw_ios_per_sec": 0, 00:08:05.576 "rw_mbytes_per_sec": 0, 00:08:05.576 "r_mbytes_per_sec": 0, 00:08:05.576 "w_mbytes_per_sec": 0 00:08:05.576 }, 00:08:05.576 "claimed": false, 00:08:05.576 "zoned": false, 00:08:05.576 "supported_io_types": { 00:08:05.576 "read": true, 00:08:05.576 "write": true, 00:08:05.576 "unmap": true, 00:08:05.576 "flush": false, 00:08:05.576 "reset": true, 00:08:05.576 "nvme_admin": false, 00:08:05.576 "nvme_io": false, 00:08:05.576 "nvme_io_md": false, 00:08:05.576 "write_zeroes": true, 00:08:05.576 "zcopy": false, 00:08:05.576 "get_zone_info": false, 00:08:05.576 "zone_management": false, 00:08:05.576 "zone_append": false, 00:08:05.576 "compare": false, 00:08:05.576 "compare_and_write": false, 00:08:05.576 "abort": false, 00:08:05.576 "seek_hole": true, 00:08:05.576 "seek_data": true, 00:08:05.576 "copy": false, 00:08:05.576 "nvme_iov_md": false 00:08:05.576 }, 00:08:05.576 "driver_specific": { 00:08:05.576 "lvol": { 00:08:05.577 "lvol_store_uuid": "3621326d-4326-4eb0-b124-94019189aff0", 00:08:05.577 "base_bdev": "aio_bdev", 00:08:05.577 "thin_provision": false, 00:08:05.577 "num_allocated_clusters": 38, 00:08:05.577 "snapshot": false, 00:08:05.577 "clone": false, 00:08:05.577 "esnap_clone": false 00:08:05.577 } 00:08:05.577 } 00:08:05.577 } 00:08:05.577 ] 00:08:05.577 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:05.577 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:05.577 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:08:05.835 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:05.835 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3621326d-4326-4eb0-b124-94019189aff0 00:08:05.835 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:06.094 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:06.094 09:10:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete da4f3849-51c0-4aca-be17-a4c4d3b6305a 00:08:06.094 09:10:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3621326d-4326-4eb0-b124-94019189aff0 00:08:06.353 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.612 00:08:06.612 real 0m16.973s 00:08:06.612 user 0m44.041s 00:08:06.612 sys 0m3.766s 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.612 ************************************ 00:08:06.612 END TEST lvs_grow_dirty 00:08:06.612 ************************************ 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:06.612 nvmf_trace.0 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.612 rmmod nvme_tcp 00:08:06.612 rmmod nvme_fabrics 00:08:06.612 rmmod nvme_keyring 00:08:06.612 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:06.871 
09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 970982 ']' 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 970982 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 970982 ']' 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 970982 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 970982 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 970982' 00:08:06.871 killing process with pid 970982 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 970982 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 970982 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.871 09:10:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.408 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:09.408 00:08:09.408 real 0m41.947s 00:08:09.408 user 1m4.963s 00:08:09.408 sys 0m10.204s 00:08:09.408 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.408 09:10:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.408 ************************************ 00:08:09.408 END TEST nvmf_lvs_grow 00:08:09.408 ************************************ 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.408 ************************************ 00:08:09.408 START TEST nvmf_bdev_io_wait 00:08:09.408 ************************************ 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:09.408 * Looking for test storage... 00:08:09.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.408 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.409 --rc genhtml_branch_coverage=1 00:08:09.409 --rc genhtml_function_coverage=1 00:08:09.409 --rc genhtml_legend=1 00:08:09.409 --rc geninfo_all_blocks=1 00:08:09.409 --rc geninfo_unexecuted_blocks=1 00:08:09.409 00:08:09.409 ' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.409 --rc genhtml_branch_coverage=1 00:08:09.409 --rc genhtml_function_coverage=1 00:08:09.409 --rc genhtml_legend=1 00:08:09.409 --rc geninfo_all_blocks=1 00:08:09.409 --rc geninfo_unexecuted_blocks=1 00:08:09.409 00:08:09.409 ' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.409 --rc genhtml_branch_coverage=1 00:08:09.409 --rc genhtml_function_coverage=1 00:08:09.409 --rc genhtml_legend=1 00:08:09.409 --rc geninfo_all_blocks=1 00:08:09.409 --rc geninfo_unexecuted_blocks=1 00:08:09.409 00:08:09.409 ' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.409 --rc genhtml_branch_coverage=1 00:08:09.409 --rc genhtml_function_coverage=1 00:08:09.409 --rc genhtml_legend=1 00:08:09.409 --rc geninfo_all_blocks=1 00:08:09.409 --rc geninfo_unexecuted_blocks=1 00:08:09.409 00:08:09.409 ' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.409 09:10:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.409 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.410 09:10:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.980 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:15.981 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:15.981 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.981 09:10:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:15.981 Found net devices under 0000:86:00.0: cvl_0_0 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:15.981 Found net devices under 0000:86:00.1: cvl_0_1 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.981 09:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:08:15.981 00:08:15.981 --- 10.0.0.2 ping statistics --- 00:08:15.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.981 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:08:15.981 00:08:15.981 --- 10.0.0.1 ping statistics --- 00:08:15.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.981 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=975262 00:08:15.981 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 975262 00:08:15.982 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:15.982 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 975262 ']' 00:08:15.982 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.982 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.982 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.982 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.982 09:10:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:15.982 [2024-11-19 09:10:16.314505] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:08:15.982 [2024-11-19 09:10:16.314550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.982 [2024-11-19 09:10:16.394755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.982 [2024-11-19 09:10:16.436964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.982 [2024-11-19 09:10:16.437016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.982 [2024-11-19 09:10:16.437023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.982 [2024-11-19 09:10:16.437029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.982 [2024-11-19 09:10:16.437034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.982 [2024-11-19 09:10:16.438600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.982 [2024-11-19 09:10:16.438709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.982 [2024-11-19 09:10:16.438852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.982 [2024-11-19 09:10:16.438853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:16.240 [2024-11-19 09:10:17.261500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.240 Malloc0 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.240 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.498 [2024-11-19 09:10:17.309186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=975315 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=975318 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.498 { 00:08:16.498 "params": { 
00:08:16.498 "name": "Nvme$subsystem", 00:08:16.498 "trtype": "$TEST_TRANSPORT", 00:08:16.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.498 "adrfam": "ipv4", 00:08:16.498 "trsvcid": "$NVMF_PORT", 00:08:16.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.498 "hdgst": ${hdgst:-false}, 00:08:16.498 "ddgst": ${ddgst:-false} 00:08:16.498 }, 00:08:16.498 "method": "bdev_nvme_attach_controller" 00:08:16.498 } 00:08:16.498 EOF 00:08:16.498 )") 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=975320 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.498 { 00:08:16.498 "params": { 00:08:16.498 "name": "Nvme$subsystem", 00:08:16.498 "trtype": "$TEST_TRANSPORT", 00:08:16.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.498 "adrfam": "ipv4", 00:08:16.498 "trsvcid": "$NVMF_PORT", 00:08:16.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.498 "hdgst": ${hdgst:-false}, 00:08:16.498 "ddgst": ${ddgst:-false} 00:08:16.498 }, 00:08:16.498 "method": "bdev_nvme_attach_controller" 00:08:16.498 } 00:08:16.498 EOF 00:08:16.498 )") 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=975323 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.498 { 00:08:16.498 "params": { 00:08:16.498 "name": "Nvme$subsystem", 00:08:16.498 "trtype": "$TEST_TRANSPORT", 00:08:16.498 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:08:16.498 "adrfam": "ipv4", 00:08:16.498 "trsvcid": "$NVMF_PORT", 00:08:16.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.498 "hdgst": ${hdgst:-false}, 00:08:16.498 "ddgst": ${ddgst:-false} 00:08:16.498 }, 00:08:16.498 "method": "bdev_nvme_attach_controller" 00:08:16.498 } 00:08:16.498 EOF 00:08:16.498 )") 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.498 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:16.499 { 00:08:16.499 "params": { 00:08:16.499 "name": "Nvme$subsystem", 00:08:16.499 "trtype": "$TEST_TRANSPORT", 00:08:16.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.499 "adrfam": "ipv4", 00:08:16.499 "trsvcid": "$NVMF_PORT", 00:08:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.499 "hdgst": ${hdgst:-false}, 00:08:16.499 "ddgst": ${ddgst:-false} 00:08:16.499 }, 00:08:16.499 "method": "bdev_nvme_attach_controller" 00:08:16.499 } 00:08:16.499 EOF 00:08:16.499 )") 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 975315 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.499 "params": { 00:08:16.499 "name": "Nvme1", 00:08:16.499 "trtype": "tcp", 00:08:16.499 "traddr": "10.0.0.2", 00:08:16.499 "adrfam": "ipv4", 00:08:16.499 "trsvcid": "4420", 00:08:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.499 "hdgst": false, 00:08:16.499 "ddgst": false 00:08:16.499 }, 00:08:16.499 "method": "bdev_nvme_attach_controller" 00:08:16.499 }' 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.499 "params": { 00:08:16.499 "name": "Nvme1", 00:08:16.499 "trtype": "tcp", 00:08:16.499 "traddr": "10.0.0.2", 00:08:16.499 "adrfam": "ipv4", 00:08:16.499 "trsvcid": "4420", 00:08:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.499 "hdgst": false, 00:08:16.499 "ddgst": false 00:08:16.499 }, 00:08:16.499 "method": "bdev_nvme_attach_controller" 00:08:16.499 }' 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.499 "params": { 00:08:16.499 "name": "Nvme1", 00:08:16.499 "trtype": "tcp", 00:08:16.499 "traddr": "10.0.0.2", 00:08:16.499 "adrfam": "ipv4", 00:08:16.499 "trsvcid": "4420", 00:08:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.499 "hdgst": false, 00:08:16.499 "ddgst": false 00:08:16.499 }, 00:08:16.499 "method": "bdev_nvme_attach_controller" 00:08:16.499 }' 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:16.499 09:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:16.499 "params": { 00:08:16.499 "name": "Nvme1", 00:08:16.499 "trtype": "tcp", 00:08:16.499 "traddr": "10.0.0.2", 00:08:16.499 "adrfam": "ipv4", 00:08:16.499 "trsvcid": "4420", 00:08:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.499 "hdgst": false, 00:08:16.499 "ddgst": false 00:08:16.499 }, 00:08:16.499 "method": "bdev_nvme_attach_controller" 00:08:16.499 }' 00:08:16.499 [2024-11-19 09:10:17.359324] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:08:16.499 [2024-11-19 09:10:17.359377] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:16.499 [2024-11-19 09:10:17.360059] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:08:16.499 [2024-11-19 09:10:17.360102] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:16.499 [2024-11-19 09:10:17.364285] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:08:16.499 [2024-11-19 09:10:17.364334] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:16.499 [2024-11-19 09:10:17.364903] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:08:16.499 [2024-11-19 09:10:17.364946] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:08:16.499 [2024-11-19 09:10:17.547485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.755 [2024-11-19 09:10:17.590736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:08:16.755 [2024-11-19 09:10:17.645973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.755 [2024-11-19 09:10:17.688863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:08:16.755 [2024-11-19 09:10:17.742380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.755 [2024-11-19 09:10:17.802649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:08:16.755 [2024-11-19 09:10:17.803341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.012 [2024-11-19 09:10:17.846001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:08:17.012 Running I/O for 1 seconds...
00:08:17.012 Running I/O for 1 seconds...
00:08:17.012 Running I/O for 1 seconds...
00:08:17.270 Running I/O for 1 seconds...
00:08:17.835 15419.00 IOPS, 60.23 MiB/s
00:08:17.835 Latency(us)
00:08:17.835 [2024-11-19T08:10:18.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:17.835 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:17.835 Nvme1n1 : 1.01 15482.54 60.48 0.00 0.00 8246.46 3533.25 15614.66
00:08:17.835 [2024-11-19T08:10:18.894Z] ===================================================================================================================
00:08:17.835 [2024-11-19T08:10:18.894Z] Total : 15482.54 60.48 0.00 0.00 8246.46 3533.25 15614.66
00:08:18.093 6358.00 IOPS, 24.84 MiB/s
00:08:18.093 Latency(us)
00:08:18.093 [2024-11-19T08:10:19.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:18.093 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:18.093 Nvme1n1 : 1.01 6408.22 25.03 0.00 0.00 19811.95 7864.32 30089.57
00:08:18.093 [2024-11-19T08:10:19.152Z] ===================================================================================================================
00:08:18.093 [2024-11-19T08:10:19.152Z] Total : 6408.22 25.03 0.00 0.00 19811.95 7864.32 30089.57
00:08:18.093 246472.00 IOPS, 962.78 MiB/s
00:08:18.093 Latency(us)
00:08:18.093 [2024-11-19T08:10:19.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:18.093 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:18.093 Nvme1n1 : 1.00 246087.74 961.28 0.00 0.00 517.44 233.29 1531.55
00:08:18.093 [2024-11-19T08:10:19.152Z] ===================================================================================================================
00:08:18.093 [2024-11-19T08:10:19.152Z] Total : 246087.74 961.28 0.00 0.00 517.44 233.29 1531.55
00:08:18.093 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 975318
00:08:18.093 6933.00 IOPS, 27.08 MiB/s
00:08:18.093 Latency(us)
00:08:18.093 [2024-11-19T08:10:19.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:18.093 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:18.093 Nvme1n1 : 1.01 7021.32 27.43 0.00 0.00 18173.29 4701.50 47869.77
00:08:18.093 [2024-11-19T08:10:19.152Z] ===================================================================================================================
00:08:18.093 [2024-11-19T08:10:19.152Z] Total : 7021.32 27.43 0.00 0.00 18173.29 4701.50 47869.77
00:08:18.353 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 975320
00:08:18.353 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 975323
00:08:18.353 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:18.353 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:18.354 rmmod nvme_tcp
00:08:18.354 rmmod nvme_fabrics
00:08:18.354 rmmod nvme_keyring
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 975262 ']'
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 975262
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 975262 ']'
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 975262
00:08:18.354 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname
00:08:18.355 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:18.355 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 975262
00:08:18.355 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:18.355 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:18.355 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 975262'
00:08:18.355 killing process with pid 975262
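The MiB/s column in the four Latency tables above is just IOPS scaled by the 4 KiB I/O size, which makes the numbers easy to sanity-check:

    MiB/s = IOPS * 4096 / 2^20
    write: 6408.22 * 4096 / 1048576 ~= 25.03 MiB/s
    flush: 246087.74 * 4096 / 1048576 ~= 961.28 MiB/s

The flush job clears roughly 35x more operations per second than read or write, plausibly because a flush carries no data payload and the namespace sits on a RAM-backed Malloc bdev, so its MiB/s figure is nominal rather than bytes actually moved.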
00:08:18.355 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 975262 00:08:18.355 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 975262 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.614 09:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.517 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.517 00:08:20.517 real 0m11.524s 00:08:20.517 user 0m19.283s 00:08:20.517 sys 0m6.098s 00:08:20.517 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.517 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:20.517 ************************************ 00:08:20.517 END TEST nvmf_bdev_io_wait 00:08:20.517 ************************************ 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.776 ************************************ 00:08:20.776 START TEST nvmf_queue_depth 00:08:20.776 ************************************ 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:20.776 * Looking for test storage... 
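The iptr call in the teardown above is the counterpart of the ipts helper seen during setup: every rule the test inserts carries an SPDK_NVMF comment, so cleanup can sweep them all without tracking rule numbers. A sketch of the pattern as these traces show it (the exact plumbing inside iptr is assumed from the three commands traced at nvmf/common.sh@791):

    # Setup: tag the ACCEPT rule with a comment naming itself.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Teardown: rewrite the ruleset minus every tagged rule.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The nvmf_queue_depth test that starts here re-adds the same tagged rule during its own nvmftestinit bring-up, as the traces below show.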
00:08:20.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:20.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.776 --rc genhtml_branch_coverage=1 00:08:20.776 --rc genhtml_function_coverage=1 00:08:20.776 --rc genhtml_legend=1 00:08:20.776 --rc geninfo_all_blocks=1 00:08:20.776 --rc geninfo_unexecuted_blocks=1 00:08:20.776 00:08:20.776 ' 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:20.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.776 --rc genhtml_branch_coverage=1 00:08:20.776 --rc genhtml_function_coverage=1 00:08:20.776 --rc genhtml_legend=1 00:08:20.776 --rc geninfo_all_blocks=1 00:08:20.776 --rc geninfo_unexecuted_blocks=1 00:08:20.776 00:08:20.776 ' 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:20.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.776 --rc genhtml_branch_coverage=1 00:08:20.776 --rc genhtml_function_coverage=1 00:08:20.776 --rc genhtml_legend=1 00:08:20.776 --rc geninfo_all_blocks=1 00:08:20.776 --rc geninfo_unexecuted_blocks=1 00:08:20.776 00:08:20.776 ' 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:20.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.776 --rc genhtml_branch_coverage=1 00:08:20.776 --rc genhtml_function_coverage=1 00:08:20.776 --rc genhtml_legend=1 00:08:20.776 --rc geninfo_all_blocks=1 00:08:20.776 --rc geninfo_unexecuted_blocks=1 00:08:20.776 00:08:20.776 ' 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.776 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.035 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.035 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.036 09:10:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:27.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:27.601 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:27.601 Found net devices under 0000:86:00.0: cvl_0_0 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:27.601 Found net devices under 0000:86:00.1: cvl_0_1 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.601 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:08:27.601 00:08:27.601 --- 10.0.0.2 ping statistics --- 00:08:27.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.601 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:27.602 00:08:27.602 --- 10.0.0.1 ping statistics --- 00:08:27.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.602 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=979307 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 979307 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 979307 ']' 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.602 09:10:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 [2024-11-19 09:10:27.901775] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
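Condensed out of the xtrace above: nvmf_tcp_init turns the single E810 pair into a two-host topology by moving one port (cvl_0_0) into a private namespace as the target side and leaving its sibling (cvl_0_1) in the root namespace as the initiator. A hand-written sketch of that sequence, using the device names and addresses as logged in this run (illustrative, not the verbatim common.sh source; jenkins-absolute paths shortened):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open NVMe/TCP port 4420; the comment tag lets cleanup strip exactly this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # root ns -> target, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> initiator
    # the target app itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Both pings answering is what lets the init path return 0 and the test proceed.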
00:08:27.602 [2024-11-19 09:10:27.901821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.602 [2024-11-19 09:10:27.982413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.602 [2024-11-19 09:10:28.021900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.602 [2024-11-19 09:10:28.021935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.602 [2024-11-19 09:10:28.021941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.602 [2024-11-19 09:10:28.021951] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.602 [2024-11-19 09:10:28.021974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.602 [2024-11-19 09:10:28.022522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 [2024-11-19 09:10:28.165321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 Malloc0 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.602 09:10:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 [2024-11-19 09:10:28.215716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=979329 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 979329 /var/tmp/bdevperf.sock 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 979329 ']' 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 [2024-11-19 09:10:28.268108] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
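Behind the rpc_cmd wrappers, the bring-up queue_depth.sh just walked through is five RPCs against the target's default /var/tmp/spdk.sock, plus a paused bdevperf instance on the initiator side whose controller is attached over a second RPC socket. A condensed sketch (jenkins-absolute paths shortened for readability; rpc_cmd's retry plumbing omitted, all arguments as logged):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator: bdevperf starts paused (-z), queue depth 1024, 4 KiB verify I/O, 10 s
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

perform_tests is what produces the per-second IOPS samples and the JSON result block that follow.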
00:08:27.602 [2024-11-19 09:10:28.268147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid979329 ] 00:08:27.602 [2024-11-19 09:10:28.344307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.602 [2024-11-19 09:10:28.385340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.602 NVMe0n1 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.602 09:10:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:27.860 Running I/O for 10 seconds... 00:08:29.727 11463.00 IOPS, 44.78 MiB/s [2024-11-19T08:10:31.721Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-19T08:10:33.094Z] 11938.33 IOPS, 46.63 MiB/s [2024-11-19T08:10:34.028Z] 12061.75 IOPS, 47.12 MiB/s [2024-11-19T08:10:34.960Z] 12147.80 IOPS, 47.45 MiB/s [2024-11-19T08:10:35.894Z] 12223.00 IOPS, 47.75 MiB/s [2024-11-19T08:10:36.830Z] 12255.29 IOPS, 47.87 MiB/s [2024-11-19T08:10:37.766Z] 12265.00 IOPS, 47.91 MiB/s [2024-11-19T08:10:39.140Z] 12268.11 IOPS, 47.92 MiB/s [2024-11-19T08:10:39.140Z] 12270.60 IOPS, 47.93 MiB/s 00:08:38.081 Latency(us) 00:08:38.081 [2024-11-19T08:10:39.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.081 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:38.081 Verification LBA range: start 0x0 length 0x4000 00:08:38.081 NVMe0n1 : 10.05 12305.98 48.07 0.00 0.00 82945.65 14930.81 55620.12 00:08:38.081 [2024-11-19T08:10:39.140Z] =================================================================================================================== 00:08:38.081 [2024-11-19T08:10:39.140Z] Total : 12305.98 48.07 0.00 0.00 82945.65 14930.81 55620.12 00:08:38.081 { 00:08:38.081 "results": [ 00:08:38.081 { 00:08:38.081 "job": "NVMe0n1", 00:08:38.081 "core_mask": "0x1", 00:08:38.081 "workload": "verify", 00:08:38.081 "status": "finished", 00:08:38.081 "verify_range": { 00:08:38.081 "start": 0, 00:08:38.081 "length": 16384 00:08:38.081 }, 00:08:38.081 "queue_depth": 1024, 00:08:38.081 "io_size": 4096, 00:08:38.081 "runtime": 10.054458, 00:08:38.081 "iops": 12305.984071941024, 00:08:38.081 "mibps": 48.070250281019625, 00:08:38.081 "io_failed": 0, 00:08:38.081 "io_timeout": 0, 00:08:38.081 "avg_latency_us": 82945.64822191378, 00:08:38.081 "min_latency_us": 14930.810434782608, 00:08:38.081 "max_latency_us": 55620.11826086957 00:08:38.081 } 00:08:38.081 ], 00:08:38.081 "core_count": 1 00:08:38.081 } 00:08:38.081 09:10:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 979329 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 979329 ']' 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 979329 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 979329 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 979329' 00:08:38.081 killing process with pid 979329 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 979329 00:08:38.081 Received shutdown signal, test time was about 10.000000 seconds 00:08:38.081 00:08:38.081 Latency(us) 00:08:38.081 [2024-11-19T08:10:39.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.081 [2024-11-19T08:10:39.140Z] =================================================================================================================== 00:08:38.081 [2024-11-19T08:10:39.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:38.081 09:10:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 979329 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.081 rmmod nvme_tcp 00:08:38.081 rmmod nvme_fabrics 00:08:38.081 rmmod nvme_keyring 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 979307 ']' 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 979307 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 979307 ']' 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 979307 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 979307 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 979307' 00:08:38.081 killing process with pid 979307 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 979307 00:08:38.081 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 979307 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.340 09:10:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.874 00:08:40.874 real 0m19.743s 00:08:40.874 user 0m23.108s 00:08:40.874 sys 0m6.047s 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:40.874 ************************************ 00:08:40.874 END TEST nvmf_queue_depth 00:08:40.874 ************************************ 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.874 ************************************ 00:08:40.874 START TEST nvmf_target_multipath 00:08:40.874 ************************************ 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:40.874 * Looking for test storage... 00:08:40.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:40.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.874 --rc genhtml_branch_coverage=1 00:08:40.874 --rc genhtml_function_coverage=1 00:08:40.874 --rc genhtml_legend=1 00:08:40.874 --rc geninfo_all_blocks=1 00:08:40.874 --rc geninfo_unexecuted_blocks=1 00:08:40.874 00:08:40.874 ' 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:40.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.874 --rc genhtml_branch_coverage=1 00:08:40.874 --rc genhtml_function_coverage=1 00:08:40.874 --rc genhtml_legend=1 00:08:40.874 --rc geninfo_all_blocks=1 00:08:40.874 --rc geninfo_unexecuted_blocks=1 00:08:40.874 00:08:40.874 ' 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:40.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.874 --rc genhtml_branch_coverage=1 00:08:40.874 --rc genhtml_function_coverage=1 00:08:40.874 --rc genhtml_legend=1 00:08:40.874 --rc geninfo_all_blocks=1 00:08:40.874 --rc geninfo_unexecuted_blocks=1 00:08:40.874 00:08:40.874 ' 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:40.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.874 --rc genhtml_branch_coverage=1 00:08:40.874 --rc genhtml_function_coverage=1 00:08:40.874 --rc genhtml_legend=1 00:08:40.874 --rc geninfo_all_blocks=1 00:08:40.874 --rc geninfo_unexecuted_blocks=1 00:08:40.874 00:08:40.874 ' 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.874 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:40.875 09:10:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:47.454 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.454 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.454 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.454 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.454 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.454 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.454 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:47.455 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:47.455 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:47.455 Found net devices under 0000:86:00.0: cvl_0_0 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.455 09:10:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:47.455 Found net devices under 0000:86:00.1: cvl_0_1 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.455 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:08:47.456 00:08:47.456 --- 10.0.0.2 ping statistics --- 00:08:47.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.456 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:08:47.456 00:08:47.456 --- 10.0.0.1 ping statistics --- 00:08:47.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.456 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:47.456 only one NIC for nvmf test 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
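nvmftestfini then unwinds the setup. With the iptr helper expanded into the pipeline its three logged commands suggest, and with one labeled assumption about what _remove_spdk_ns does here, the teardown is roughly:

    sync
    modprobe -v -r nvme-tcp      # retried in a {1..20} loop until the modules unload
    modprobe -v -r nvme-fabrics
    # iptr: drop only the comment-tagged SPDK rules, keep the rest of the firewall
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    _remove_spdk_ns              # assumption: amounts to ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1

The same fini path runs twice in this test: once from multipath.sh's early 'only one NIC' exit and once from the EXIT trap, which is why the rmmod and iptr lines repeat below.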
00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.456 rmmod nvme_tcp 00:08:47.456 rmmod nvme_fabrics 00:08:47.456 rmmod nvme_keyring 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.456 09:10:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.087 00:08:49.087 real 0m8.422s 00:08:49.087 user 0m1.877s 00:08:49.087 sys 0m4.561s 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.087 ************************************ 00:08:49.087 END TEST nvmf_target_multipath 00:08:49.087 ************************************ 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.087 ************************************ 00:08:49.087 START TEST nvmf_zcopy 00:08:49.087 ************************************ 00:08:49.087 09:10:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:49.087 * Looking for test storage... 
00:08:49.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:49.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.087 --rc genhtml_branch_coverage=1 00:08:49.087 --rc genhtml_function_coverage=1 00:08:49.087 --rc genhtml_legend=1 00:08:49.087 --rc geninfo_all_blocks=1 00:08:49.087 --rc geninfo_unexecuted_blocks=1 00:08:49.087 00:08:49.087 ' 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:49.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.087 --rc genhtml_branch_coverage=1 00:08:49.087 --rc genhtml_function_coverage=1 00:08:49.087 --rc genhtml_legend=1 00:08:49.087 --rc geninfo_all_blocks=1 00:08:49.087 --rc geninfo_unexecuted_blocks=1 00:08:49.087 00:08:49.087 ' 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:49.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.087 --rc genhtml_branch_coverage=1 00:08:49.087 --rc genhtml_function_coverage=1 00:08:49.087 --rc genhtml_legend=1 00:08:49.087 --rc geninfo_all_blocks=1 00:08:49.087 --rc geninfo_unexecuted_blocks=1 00:08:49.087 00:08:49.087 ' 00:08:49.087 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:49.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.088 --rc genhtml_branch_coverage=1 00:08:49.088 --rc genhtml_function_coverage=1 00:08:49.088 --rc genhtml_legend=1 00:08:49.088 --rc geninfo_all_blocks=1 00:08:49.088 --rc geninfo_unexecuted_blocks=1 00:08:49.088 00:08:49.088 ' 00:08:49.088 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.088 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:49.088 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:49.088 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.088 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.088 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.088 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.088 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:49.347 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.348 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.348 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.348 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:49.348 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:49.348 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:49.348 09:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:55.917 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:55.917 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:55.917 Found net devices under 0000:86:00.0: cvl_0_0 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:55.917 Found net devices under 0000:86:00.1: cvl_0_1 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:55.917 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.918 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.918 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.918 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.918 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.918 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.918 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.918 09:10:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:08:55.918 00:08:55.918 --- 10.0.0.2 ping statistics --- 00:08:55.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.918 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:08:55.918 00:08:55.918 --- 10.0.0.1 ping statistics --- 00:08:55.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.918 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=988238 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 988238 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 988238 ']' 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 [2024-11-19 09:10:56.219842] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:08:55.918 [2024-11-19 09:10:56.219893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.918 [2024-11-19 09:10:56.300458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.918 [2024-11-19 09:10:56.340284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.918 [2024-11-19 09:10:56.340321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.918 [2024-11-19 09:10:56.340329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.918 [2024-11-19 09:10:56.340335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.918 [2024-11-19 09:10:56.340340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.918 [2024-11-19 09:10:56.340879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 [2024-11-19 09:10:56.487933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 [2024-11-19 09:10:56.508152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.918 malloc0 00:08:55.918 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.919 { 00:08:55.919 "params": { 00:08:55.919 "name": "Nvme$subsystem", 00:08:55.919 "trtype": "$TEST_TRANSPORT", 00:08:55.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.919 "adrfam": "ipv4", 00:08:55.919 "trsvcid": "$NVMF_PORT", 00:08:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.919 "hdgst": ${hdgst:-false}, 00:08:55.919 "ddgst": ${ddgst:-false} 00:08:55.919 }, 00:08:55.919 "method": "bdev_nvme_attach_controller" 00:08:55.919 } 00:08:55.919 EOF 00:08:55.919 )") 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:55.919 09:10:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.919 "params": { 00:08:55.919 "name": "Nvme1", 00:08:55.919 "trtype": "tcp", 00:08:55.919 "traddr": "10.0.0.2", 00:08:55.919 "adrfam": "ipv4", 00:08:55.919 "trsvcid": "4420", 00:08:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.919 "hdgst": false, 00:08:55.919 "ddgst": false 00:08:55.919 }, 00:08:55.919 "method": "bdev_nvme_attach_controller" 00:08:55.919 }' 00:08:55.919 [2024-11-19 09:10:56.594228] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:08:55.919 [2024-11-19 09:10:56.594269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988258 ] 00:08:55.919 [2024-11-19 09:10:56.666999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.919 [2024-11-19 09:10:56.709192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.919 Running I/O for 10 seconds... 00:08:57.860 8446.00 IOPS, 65.98 MiB/s [2024-11-19T08:11:00.293Z] 8512.00 IOPS, 66.50 MiB/s [2024-11-19T08:11:01.228Z] 8539.67 IOPS, 66.72 MiB/s [2024-11-19T08:11:02.163Z] 8511.50 IOPS, 66.50 MiB/s [2024-11-19T08:11:03.097Z] 8531.60 IOPS, 66.65 MiB/s [2024-11-19T08:11:04.031Z] 8542.33 IOPS, 66.74 MiB/s [2024-11-19T08:11:04.965Z] 8544.43 IOPS, 66.75 MiB/s [2024-11-19T08:11:06.340Z] 8552.50 IOPS, 66.82 MiB/s [2024-11-19T08:11:07.275Z] 8559.11 IOPS, 66.87 MiB/s [2024-11-19T08:11:07.275Z] 8564.50 IOPS, 66.91 MiB/s 00:09:06.216 Latency(us) 00:09:06.216 [2024-11-19T08:11:07.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.216 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:06.216 Verification LBA range: start 0x0 length 0x1000 00:09:06.216 Nvme1n1 : 10.01 8567.86 66.94 0.00 0.00 14897.19 2364.99 24162.84 00:09:06.216 [2024-11-19T08:11:07.275Z] =================================================================================================================== 00:09:06.216 [2024-11-19T08:11:07.275Z] Total : 8567.86 66.94 0.00 0.00 14897.19 2364.99 24162.84 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=990090 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.216 { 00:09:06.216 "params": { 00:09:06.216 "name": 
"Nvme$subsystem", 00:09:06.216 "trtype": "$TEST_TRANSPORT", 00:09:06.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.216 "adrfam": "ipv4", 00:09:06.216 "trsvcid": "$NVMF_PORT", 00:09:06.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.216 "hdgst": ${hdgst:-false}, 00:09:06.216 "ddgst": ${ddgst:-false} 00:09:06.216 }, 00:09:06.216 "method": "bdev_nvme_attach_controller" 00:09:06.216 } 00:09:06.216 EOF 00:09:06.216 )") 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:06.216 [2024-11-19 09:11:07.113328] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-11-19 09:11:07.113364] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:06.216 09:11:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.216 "params": { 00:09:06.216 "name": "Nvme1", 00:09:06.216 "trtype": "tcp", 00:09:06.216 "traddr": "10.0.0.2", 00:09:06.216 "adrfam": "ipv4", 00:09:06.216 "trsvcid": "4420", 00:09:06.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.216 "hdgst": false, 00:09:06.216 "ddgst": false 00:09:06.216 }, 00:09:06.216 "method": "bdev_nvme_attach_controller" 00:09:06.216 }' 00:09:06.216 [2024-11-19 09:11:07.125333] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.216 [2024-11-19 09:11:07.125348] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.216 [2024-11-19 09:11:07.137351] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.137361] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.149385] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.149395] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.153054] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:09:06.217 [2024-11-19 09:11:07.153095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990090 ] 00:09:06.217 [2024-11-19 09:11:07.161416] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.161426] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.173450] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.173459] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.185483] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.185493] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.197524] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.197534] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.209558] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.209567] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.221589] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.221600] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.226414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.217 [2024-11-19 09:11:07.233621] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.233632] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.245657] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.245672] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.257716] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.257735] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.217 [2024-11-19 09:11:07.268292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.217 [2024-11-19 09:11:07.269721] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.217 [2024-11-19 09:11:07.269734] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.281780] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.281807] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.293790] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.293806] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.305818] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:09:06.475 [2024-11-19 09:11:07.305836] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.317848] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.317861] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.329881] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.329897] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.341910] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.341921] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.353943] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.353957] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.365999] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.366020] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.378024] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.378039] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.390052] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.390064] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.402076] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.402085] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.414107] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.414116] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.426150] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.426164] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.438183] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.438196] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.450213] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.450225] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.462262] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.462281] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 Running I/O for 5 seconds... 
00:09:06.475 [2024-11-19 09:11:07.474276] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.475 [2024-11-19 09:11:07.474287] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.475 [2024-11-19 09:11:07.486254] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-11-19 09:11:07.486273] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-11-19 09:11:07.495790] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-11-19 09:11:07.495808] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-11-19 09:11:07.504830] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-11-19 09:11:07.504848] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-11-19 09:11:07.519881] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-11-19 09:11:07.519899] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.476 [2024-11-19 09:11:07.531205] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.476 [2024-11-19 09:11:07.531225] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.734 [2024-11-19 09:11:07.540666] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.734 [2024-11-19 09:11:07.540686] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.734 [2024-11-19 09:11:07.549474] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.734 [2024-11-19 09:11:07.549493] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.734 [2024-11-19 09:11:07.558813] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.734 [2024-11-19 09:11:07.558832] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.734 [2024-11-19 09:11:07.573591] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.734 [2024-11-19 09:11:07.573610] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.734 [2024-11-19 09:11:07.584714] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.734 [2024-11-19 09:11:07.584733] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.734 [2024-11-19 09:11:07.593551] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.734 [2024-11-19 09:11:07.593569] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.734 [2024-11-19 09:11:07.603155] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.734 [2024-11-19 09:11:07.603177] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.734 [2024-11-19 09:11:07.612062] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.734 [2024-11-19 09:11:07.612086] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.734 [2024-11-19 09:11:07.626741] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.734 
[2024-11-19 09:11:07.626759] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:06.734 [2024-11-19 09:11:07.635942] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:06.734 [2024-11-19 09:11:07.635969] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats, differing only in timestamps, through 09:11:08.462 -- duplicates collapsed ...]
00:09:07.512 16408.00 IOPS, 128.19 MiB/s [2024-11-19T08:11:08.571Z]
[... error pair continues repeating through 09:11:09.470 -- duplicates collapsed ...]
00:09:08.548 16527.00 IOPS, 129.12 MiB/s [2024-11-19T08:11:09.607Z]
[... error pair continues repeating through 09:11:10.470 -- duplicates collapsed ...]
00:09:09.584 16515.67 IOPS, 129.03 MiB/s [2024-11-19T08:11:10.643Z]
[... error pair continues repeating through 09:11:11.040 -- duplicates collapsed ...]
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.040688] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.054914] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.054933] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.063652] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.063671] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.072866] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.072885] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.082091] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.082109] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.096638] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.096656] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.105556] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.105575] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.115126] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.115145] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.124411] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.124429] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.133740] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.133758] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.103 [2024-11-19 09:11:11.148332] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.103 [2024-11-19 09:11:11.148352] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.162956] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.162976] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.178217] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.178237] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.192290] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.192309] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.206434] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.206453] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.218164] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.218182] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.227651] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.227670] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.236430] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.236449] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.246195] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.246214] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.255820] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.255839] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.270693] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.270712] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.286378] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.286396] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.295287] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.295305] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.304569] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.304588] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.314054] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.314071] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.328737] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.328755] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.337800] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.337818] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.347360] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.347378] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.356519] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.356537] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.365864] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.365882] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.380540] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.380558] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.395064] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.395082] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.362 [2024-11-19 09:11:11.405944] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.362 [2024-11-19 09:11:11.405967] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.620 [2024-11-19 09:11:11.420939] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.620 [2024-11-19 09:11:11.420968] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.620 [2024-11-19 09:11:11.436662] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.620 [2024-11-19 09:11:11.436682] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.620 [2024-11-19 09:11:11.450710] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.450730] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.459906] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.459926] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.469456] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.469475] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 16519.75 IOPS, 129.06 MiB/s [2024-11-19T08:11:11.680Z] [2024-11-19 09:11:11.483874] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.483893] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.492940] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.492965] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.507737] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.507756] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.518959] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.518977] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.533398] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.533417] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.542559] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:10.621 [2024-11-19 09:11:11.542577] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.556938] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.556961] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.570634] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.570652] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.579379] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.579397] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.588743] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.588761] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.598041] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.598059] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.606771] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.606789] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.621662] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.621681] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.633242] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.633266] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.642221] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.642240] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.651505] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.651524] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.660919] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.660938] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.621 [2024-11-19 09:11:11.670514] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.621 [2024-11-19 09:11:11.670532] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.685953] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.685974] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.700998] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.701016] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.710232] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.710251] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.725021] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.725041] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.735853] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.735872] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.750364] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.750382] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.761433] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.761451] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.770267] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.770285] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.784521] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.784539] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.798274] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.798292] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.811987] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.812005] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.826302] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.826320] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.835283] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.835303] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.844603] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.844621] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.858923] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.858946] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.868116] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.868134] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.877614] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.877631] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.886937] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.886961] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.901660] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.901678] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.915308] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.915327] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.924186] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.924204] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.880 [2024-11-19 09:11:11.933687] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.880 [2024-11-19 09:11:11.933707] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:11.943197] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:11.943219] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:11.952590] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:11.952609] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:11.967247] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:11.967267] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:11.981433] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:11.981453] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:11.996464] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:11.996483] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.010896] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.010916] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.019981] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.019999] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.034460] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.034479] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.043538] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.043557] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.052408] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.052426] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.061805] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.061823] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.070607] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.070630] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.085141] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.085160] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.094225] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.094244] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.103714] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.103732] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.113135] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.113153] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.122511] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.122529] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.137239] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.137257] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.151270] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.151291] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.165607] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.165627] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.176962] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.176980] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.139 [2024-11-19 09:11:12.185940] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.139 [2024-11-19 09:11:12.185965] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.200760] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.200782] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.208226] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.208244] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.221410] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.221430] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.230964] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.230984] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.245362] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.245381] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.259325] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.259344] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.268557] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.268575] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.282780] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.282799] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.291858] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.291880] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.301265] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.301283] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.315501] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.315519] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.324397] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.324417] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.333910] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.333928] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.343195] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.343213] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.352458] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.352476] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.367242] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.398 [2024-11-19 09:11:12.367261] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.398 [2024-11-19 09:11:12.376549] 
00:09:11.658 16530.20 IOPS, 129.14 MiB/s
00:09:11.658 Latency(us)
00:09:11.658 [2024-11-19T08:11:12.717Z] Device Information : runtime(s)   IOPS      MiB/s   Fail/s  TO/s  Average      min       max
00:09:11.658 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:11.658 Nvme1n1            :       5.01  16532.70  129.16    0.00  0.00  7735.09  3376.53  15272.74
00:09:11.658 [2024-11-19T08:11:12.717Z] ===================================================================================================================
00:09:11.658 [2024-11-19T08:11:12.717Z] Total              :             16532.70  129.16    0.00  0.00  7735.09  3376.53  15272.74
00:09:11.658 [2024-11-19 09:11:12.494410] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.658 [2024-11-19 09:11:12.494429] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
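As a quick sanity check on the bdevperf summary above: at the job's fixed 8192-byte I/O size, the MiB/s column follows directly from the IOPS column. A one-line sketch, with the values copied from the Total row:

```bash
# 16532.70 I/Os per second x 8192 bytes per I/O, converted to MiB/s (1 MiB = 1048576 B)
awk 'BEGIN { print 16532.70 * 8192 / 1048576 }'   # prints ~129.16, matching the table
```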
[... the error pair repeats from 09:11:12.506 through 09:11:12.650 as the run winds down ...] 00:09:11.658 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (990090) - No such process 00:09:11.658 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 990090 00:09:11.658 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.658 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.658 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.658 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.658 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:11.659 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.659 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.659 delay0 00:09:11.659 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.659 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:11.659 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.659 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.659 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.659 09:11:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:11.917 [2024-11-19 09:11:12.855085] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:18.473 Initializing NVMe Controllers 00:09:18.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:18.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:18.473 Initialization complete. Launching workers. 
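The trace above (zcopy.sh lines 53-56) is what sets up the abort statistics that follow: a delay bdev is layered on top of malloc0 so that every I/O stays in flight long enough for the abort example to cancel it. A minimal stand-alone sketch of the same calls, assuming an SPDK checkout at $SPDK_DIR and using scripts/rpc.py where the harness uses its rpc_cmd helper:

```bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed location

# Wrap malloc0 in a delay bdev: -r/-t are average/p99 read latency and
# -w/-n average/p99 write latency, in microseconds (1000000 us = 1 s).
"$SPDK_DIR/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Expose the slow bdev as namespace 1 of the test subsystem.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Drive 64-deep randrw traffic for 5 s over TCP and abort in-flight commands.
"$SPDK_DIR/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```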
00:09:18.473 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 104 00:09:18.473 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 390, failed to submit 34 00:09:18.473 success 190, unsuccessful 200, failed 0 00:09:18.473 09:11:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:18.473 09:11:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:18.473 09:11:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.473 09:11:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:18.473 09:11:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.473 09:11:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:18.473 09:11:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.473 09:11:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.473 rmmod nvme_tcp 00:09:18.473 rmmod nvme_fabrics 00:09:18.473 rmmod nvme_keyring 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 988238 ']' 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 988238 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 988238 ']' 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 988238 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 988238 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 988238' 00:09:18.473 killing process with pid 988238 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 988238 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 988238 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:18.473 09:11:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.473 09:11:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.380 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.380 00:09:20.380 real 0m31.354s 00:09:20.380 user 0m41.891s 00:09:20.380 sys 0m11.082s 00:09:20.380 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:20.380 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:20.380 ************************************ 00:09:20.380 END TEST nvmf_zcopy 00:09:20.380 ************************************ 00:09:20.380 09:11:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:20.380 09:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:20.380 09:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:20.380 09:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:20.380 ************************************ 00:09:20.380 START TEST nvmf_nmic 00:09:20.380 ************************************ 00:09:20.380 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:20.640 * Looking for test storage... 
00:09:20.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:20.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.640 --rc genhtml_branch_coverage=1 00:09:20.640 --rc genhtml_function_coverage=1 00:09:20.640 --rc genhtml_legend=1 00:09:20.640 --rc geninfo_all_blocks=1 00:09:20.640 --rc geninfo_unexecuted_blocks=1 00:09:20.640 00:09:20.640 ' 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:20.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.640 --rc genhtml_branch_coverage=1 00:09:20.640 --rc genhtml_function_coverage=1 00:09:20.640 --rc genhtml_legend=1 00:09:20.640 --rc geninfo_all_blocks=1 00:09:20.640 --rc geninfo_unexecuted_blocks=1 00:09:20.640 00:09:20.640 ' 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:20.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.640 --rc genhtml_branch_coverage=1 00:09:20.640 --rc genhtml_function_coverage=1 00:09:20.640 --rc genhtml_legend=1 00:09:20.640 --rc geninfo_all_blocks=1 00:09:20.640 --rc geninfo_unexecuted_blocks=1 00:09:20.640 00:09:20.640 ' 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:20.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.640 --rc genhtml_branch_coverage=1 00:09:20.640 --rc genhtml_function_coverage=1 00:09:20.640 --rc genhtml_legend=1 00:09:20.640 --rc geninfo_all_blocks=1 00:09:20.640 --rc geninfo_unexecuted_blocks=1 00:09:20.640 00:09:20.640 ' 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
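The lt / cmp_versions trace above is the stock scripts/common.sh idiom for comparing dotted version strings, here concluding that lcov 1.15 predates 2 and picking the matching LCOV_OPTS. A condensed sketch of the same field-by-field comparison (the helper name version_lt is illustrative, and numeric version components are assumed):

```bash
version_lt() {                 # usage: version_lt 1.15 2  ->  true if $1 < $2
    local -a v1 v2
    local IFS='.-:'            # split on the same separators as the trace
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # missing fields count as 0, so "2" compares like "2.0"
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                   # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov predates 2.x: use the legacy --rc options"
```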
00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.640 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:20.641 
09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:20.641 09:11:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:27.210 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:27.210 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.210 09:11:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.210 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:27.210 Found net devices under 0000:86:00.0: cvl_0_0 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:27.211 Found net devices under 0000:86:00.1: cvl_0_1 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:09:27.211 00:09:27.211 --- 10.0.0.2 ping statistics --- 00:09:27.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.211 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:09:27.211 00:09:27.211 --- 10.0.0.1 ping statistics --- 00:09:27.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.211 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=995531 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 995531 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 995531 ']' 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:27.211 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.211 [2024-11-19 09:11:27.554601] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:09:27.211 [2024-11-19 09:11:27.554650] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.211 [2024-11-19 09:11:27.637560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.211 [2024-11-19 09:11:27.681602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.211 [2024-11-19 09:11:27.681639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.211 [2024-11-19 09:11:27.681646] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.211 [2024-11-19 09:11:27.681651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.211 [2024-11-19 09:11:27.681658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.211 [2024-11-19 09:11:27.683284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.211 [2024-11-19 09:11:27.683397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.211 [2024-11-19 09:11:27.683424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.212 [2024-11-19 09:11:27.683433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 [2024-11-19 09:11:27.829396] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 Malloc0 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 [2024-11-19 09:11:27.901425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:27.212 test case1: single bdev can't be used in multiple subsystems 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 [2024-11-19 09:11:27.929336] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:27.212 [2024-11-19 09:11:27.929356] subsystem.c:2300:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:27.212 [2024-11-19 09:11:27.929363] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.212 request: 00:09:27.212 { 00:09:27.212 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:27.212 "namespace": { 00:09:27.212 "bdev_name": "Malloc0", 00:09:27.212 "no_auto_visible": false 
00:09:27.212 }, 00:09:27.212 "method": "nvmf_subsystem_add_ns", 00:09:27.212 "req_id": 1 00:09:27.212 } 00:09:27.212 Got JSON-RPC error response 00:09:27.212 response: 00:09:27.212 { 00:09:27.212 "code": -32602, 00:09:27.212 "message": "Invalid parameters" 00:09:27.212 } 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:27.212 Adding namespace failed - expected result. 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:27.212 test case2: host connect to nvmf target in multiple paths 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 [2024-11-19 09:11:27.941488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.212 09:11:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.145 09:11:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:29.517 09:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.517 09:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:29.517 09:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.517 09:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:29.517 09:11:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:31.414 09:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:31.414 09:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:31.414 09:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.414 09:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:31.414 09:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.414 09:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:31.414 09:11:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:31.414 [global] 00:09:31.414 thread=1 00:09:31.414 invalidate=1 00:09:31.414 rw=write 00:09:31.414 time_based=1 00:09:31.414 runtime=1 00:09:31.414 ioengine=libaio 00:09:31.414 direct=1 00:09:31.414 bs=4096 00:09:31.414 iodepth=1 00:09:31.414 norandommap=0 00:09:31.414 numjobs=1 00:09:31.414 00:09:31.414 verify_dump=1 00:09:31.414 verify_backlog=512 00:09:31.414 verify_state_save=0 00:09:31.414 do_verify=1 00:09:31.414 verify=crc32c-intel 00:09:31.414 [job0] 00:09:31.414 filename=/dev/nvme0n1 00:09:31.414 Could not set queue depth (nvme0n1) 00:09:31.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.672 fio-3.35 00:09:31.672 Starting 1 thread 00:09:33.044 00:09:33.044 job0: (groupid=0, jobs=1): err= 0: pid=996550: Tue Nov 19 09:11:33 2024 00:09:33.044 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:09:33.044 slat (nsec): min=8892, max=23441, avg=22154.86, stdev=2973.40 00:09:33.044 clat (usec): min=40822, max=41128, avg=40970.58, stdev=87.76 00:09:33.044 lat (usec): min=40845, max=41152, avg=40992.74, stdev=88.54 00:09:33.044 clat percentiles (usec): 00:09:33.044 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:09:33.044 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:33.044 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:33.044 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:33.044 | 99.99th=[41157] 00:09:33.044 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:09:33.044 slat (usec): min=9, max=25945, avg=61.35, stdev=1146.18 00:09:33.044 clat (usec): min=118, max=410, avg=182.40, stdev=57.23 00:09:33.044 lat (usec): min=129, max=26305, avg=243.75, stdev=1155.44 00:09:33.044 clat percentiles (usec): 00:09:33.044 | 1.00th=[ 123], 5.00th=[ 125], 10.00th=[ 126], 20.00th=[ 128], 00:09:33.044 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 151], 60.00th=[ 239], 00:09:33.044 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 245], 00:09:33.044 | 99.00th=[ 251], 99.50th=[ 359], 99.90th=[ 412], 99.95th=[ 412], 00:09:33.044 | 99.99th=[ 412] 00:09:33.044 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.044 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.044 lat (usec) : 250=94.38%, 500=1.50% 00:09:33.044 lat (msec) : 50=4.12% 00:09:33.044 cpu : usr=0.29%, sys=0.49%, ctx=538, majf=0, minf=1 00:09:33.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.044 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.044 00:09:33.044 Run status group 0 (all jobs): 00:09:33.044 READ: bw=85.4KiB/s (87.5kB/s), 85.4KiB/s-85.4KiB/s (87.5kB/s-87.5kB/s), io=88.0KiB (90.1kB), run=1030-1030msec 00:09:33.044 WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB (2097kB), run=1030-1030msec 00:09:33.044 00:09:33.044 Disk stats (read/write): 00:09:33.044 nvme0n1: ios=44/512, merge=0/0, ticks=1723/94, in_queue=1817, util=98.40% 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.044 rmmod nvme_tcp 00:09:33.044 rmmod nvme_fabrics 00:09:33.044 rmmod nvme_keyring 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 995531 ']' 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 995531 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 995531 ']' 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 995531 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:33.044 09:11:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 995531 00:09:33.044 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:33.044 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:33.044 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 995531' 00:09:33.044 killing process with pid 995531 00:09:33.044 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 995531 00:09:33.044 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@976 -- # wait 995531 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.303 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.304 09:11:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.233 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.233 00:09:35.233 real 0m14.879s 00:09:35.233 user 0m33.078s 00:09:35.233 sys 0m5.175s 00:09:35.233 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.233 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.233 ************************************ 00:09:35.233 END TEST nvmf_nmic 00:09:35.233 ************************************ 00:09:35.492 09:11:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.492 09:11:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:35.492 09:11:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.492 09:11:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.492 ************************************ 00:09:35.492 START TEST nvmf_fio_target 00:09:35.492 ************************************ 00:09:35.492 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.492 * Looking for test storage... 
00:09:35.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.492 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:35.492 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:35.492 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.493 --rc genhtml_branch_coverage=1 00:09:35.493 --rc genhtml_function_coverage=1 00:09:35.493 --rc genhtml_legend=1 00:09:35.493 --rc geninfo_all_blocks=1 00:09:35.493 --rc geninfo_unexecuted_blocks=1 00:09:35.493 00:09:35.493 ' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.493 --rc genhtml_branch_coverage=1 00:09:35.493 --rc genhtml_function_coverage=1 00:09:35.493 --rc genhtml_legend=1 00:09:35.493 --rc geninfo_all_blocks=1 00:09:35.493 --rc geninfo_unexecuted_blocks=1 00:09:35.493 00:09:35.493 ' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.493 --rc genhtml_branch_coverage=1 00:09:35.493 --rc genhtml_function_coverage=1 00:09:35.493 --rc genhtml_legend=1 00:09:35.493 --rc geninfo_all_blocks=1 00:09:35.493 --rc geninfo_unexecuted_blocks=1 00:09:35.493 00:09:35.493 ' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:35.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.493 --rc genhtml_branch_coverage=1 00:09:35.493 --rc genhtml_function_coverage=1 00:09:35.493 --rc genhtml_legend=1 00:09:35.493 --rc geninfo_all_blocks=1 00:09:35.493 --rc geninfo_unexecuted_blocks=1 00:09:35.493 00:09:35.493 ' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.493 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.752 09:11:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.752 09:11:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.320 09:11:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:42.320 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:42.320 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.320 09:11:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:42.320 Found net devices under 0000:86:00.0: cvl_0_0 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:42.320 Found net devices under 0000:86:00.1: cvl_0_1 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.320 09:11:42 
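The discovery phase traced above reduces to: collect the PCI addresses whose device IDs match a supported NIC family (E810 here), then resolve each address to a kernel interface through sysfs and keep the ports that are up. A minimal standalone sketch of that flow, assuming lspci as a stand-in for the script's cached pci_bus_cache lookup (the lspci/awk/operstate details are illustrative assumptions; the sysfs paths are the ones probed above):

    #!/usr/bin/env bash
    # Enumerate Intel E810 ports (vendor 0x8086, device 0x1592 or 0x159b) by PCI address.
    mapfile -t pci_devs < <(lspci -Dnn | awk '/\[8086:(1592|159b)\]/ {print $1}')
    for pci in "${pci_devs[@]}"; do
        # The kernel publishes each port's netdev name under /sys/bus/pci/devices/<addr>/net/
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue
            dev=${path##*/}
            # Keep only interfaces whose link is up, mirroring the [[ up == up ]] test traced above
            [ "$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)" = up ] || continue
            echo "Found net devices under $pci: $dev"
        done
    done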
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:09:42.320 00:09:42.320 --- 10.0.0.2 ping statistics --- 00:09:42.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.320 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:09:42.320 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:42.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:09:42.320 00:09:42.321 --- 10.0.0.1 ping statistics --- 00:09:42.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.321 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1000322 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1000322 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1000322 ']' 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.321 09:11:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.321 [2024-11-19 09:11:42.591518] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:09:42.321 [2024-11-19 09:11:42.591560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.321 [2024-11-19 09:11:42.670920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.321 [2024-11-19 09:11:42.711400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.321 [2024-11-19 09:11:42.711437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.321 [2024-11-19 09:11:42.711444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.321 [2024-11-19 09:11:42.711450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.321 [2024-11-19 09:11:42.711454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.321 [2024-11-19 09:11:42.712879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.321 [2024-11-19 09:11:42.712979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.321 [2024-11-19 09:11:42.713032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.321 [2024-11-19 09:11:42.713032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.579 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:42.579 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:42.579 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.579 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:42.579 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.579 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.579 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:42.836 [2024-11-19 09:11:43.645027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.836 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.094 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:43.094 09:11:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.094 09:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:43.094 09:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.352 09:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:43.352 09:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:43.610 09:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:43.610 09:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:43.868 09:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.125 09:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:44.125 09:11:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.383 09:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:44.383 09:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:44.383 09:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:44.383 09:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:44.641 09:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:44.898 09:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:44.898 09:11:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:45.156 09:11:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:45.156 09:11:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:45.414 09:11:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.414 [2024-11-19 09:11:46.377383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.414 09:11:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:45.672 09:11:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:45.930 09:11:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.304 09:11:47 
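Collapsed into one place, the provisioning that target/fio.sh drives over the RPC socket is the sequence below (all arguments as traced above; rpc.py abbreviates the full scripts/rpc.py path, and the initiator's --hostnqn/--hostid flags are dropped for brevity):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Seven 64 MiB malloc bdevs with 512 B blocks: two exported directly,
    # two as RAID0 members, three as concat members
    for i in $(seq 0 6); do rpc.py bdev_malloc_create 64 512; done      # -> Malloc0 .. Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # One subsystem, one TCP listener, four namespaces
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    # Initiator side: one connect yields /dev/nvme0n1 .. /dev/nvme0n4
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420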
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:47.304 09:11:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:47.304 09:11:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.304 09:11:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:47.305 09:11:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:47.305 09:11:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:49.203 09:11:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:49.203 09:11:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:49.203 09:11:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.203 09:11:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:49.203 09:11:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.203 09:11:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:49.203 09:11:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:49.203 [global] 00:09:49.203 thread=1 00:09:49.203 invalidate=1 00:09:49.203 rw=write 00:09:49.203 time_based=1 00:09:49.203 runtime=1 00:09:49.203 ioengine=libaio 00:09:49.203 direct=1 00:09:49.203 bs=4096 00:09:49.203 iodepth=1 00:09:49.203 norandommap=0 00:09:49.203 numjobs=1 00:09:49.203 00:09:49.203 verify_dump=1 00:09:49.203 verify_backlog=512 00:09:49.203 verify_state_save=0 00:09:49.203 do_verify=1 00:09:49.203 verify=crc32c-intel 00:09:49.203 [job0] 00:09:49.203 filename=/dev/nvme0n1 00:09:49.203 [job1] 00:09:49.203 filename=/dev/nvme0n2 00:09:49.203 [job2] 00:09:49.203 filename=/dev/nvme0n3 00:09:49.203 [job3] 00:09:49.203 filename=/dev/nvme0n4 00:09:49.203 Could not set queue depth (nvme0n1) 00:09:49.203 Could not set queue depth (nvme0n2) 00:09:49.203 Could not set queue depth (nvme0n3) 00:09:49.203 Could not set queue depth (nvme0n4) 00:09:49.460 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.460 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.460 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.460 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.460 fio-3.35 00:09:49.460 Starting 4 threads 00:09:50.868 00:09:50.868 job0: (groupid=0, jobs=1): err= 0: pid=1001889: Tue Nov 19 09:11:51 2024 00:09:50.868 read: IOPS=24, BW=96.9KiB/s (99.2kB/s)(100KiB/1032msec) 00:09:50.868 slat (nsec): min=8858, max=26380, avg=20892.68, stdev=5591.60 00:09:50.868 clat (usec): min=212, max=41080, avg=37694.62, stdev=11242.94 00:09:50.868 lat (usec): min=236, max=41103, avg=37715.51, stdev=11241.65 00:09:50.868 clat percentiles (usec): 00:09:50.868 | 1.00th=[ 212], 5.00th=[ 469], 10.00th=[40633], 
20.00th=[40633], 00:09:50.868 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:50.868 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:50.868 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:50.868 | 99.99th=[41157] 00:09:50.868 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:09:50.868 slat (nsec): min=10242, max=41063, avg=11451.41, stdev=1736.80 00:09:50.868 clat (usec): min=127, max=288, avg=159.58, stdev=16.64 00:09:50.868 lat (usec): min=141, max=299, avg=171.03, stdev=17.01 00:09:50.868 clat percentiles (usec): 00:09:50.868 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:09:50.868 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:09:50.868 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 188], 00:09:50.868 | 99.00th=[ 219], 99.50th=[ 249], 99.90th=[ 289], 99.95th=[ 289], 00:09:50.868 | 99.99th=[ 289] 00:09:50.868 bw ( KiB/s): min= 4096, max= 4096, per=23.43%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.868 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.868 lat (usec) : 250=95.16%, 500=0.56% 00:09:50.868 lat (msec) : 50=4.28% 00:09:50.868 cpu : usr=0.39%, sys=0.48%, ctx=540, majf=0, minf=1 00:09:50.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.868 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.868 job1: (groupid=0, jobs=1): err= 0: pid=1001890: Tue Nov 19 09:11:51 2024 00:09:50.868 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:09:50.868 slat (nsec): min=9612, max=29115, avg=22547.77, stdev=3281.79 00:09:50.868 clat (usec): min=40408, max=42123, avg=41083.20, stdev=396.36 00:09:50.868 lat (usec): min=40418, max=42152, avg=41105.74, stdev=398.31 00:09:50.868 clat percentiles (usec): 00:09:50.868 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:50.868 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:50.868 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:50.868 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:50.868 | 99.99th=[42206] 00:09:50.868 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:09:50.868 slat (nsec): min=10655, max=74273, avg=15404.54, stdev=5616.58 00:09:50.868 clat (usec): min=124, max=391, avg=202.88, stdev=47.35 00:09:50.868 lat (usec): min=139, max=408, avg=218.28, stdev=48.93 00:09:50.868 clat percentiles (usec): 00:09:50.868 | 1.00th=[ 135], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 163], 00:09:50.868 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 194], 60.00th=[ 204], 00:09:50.868 | 70.00th=[ 212], 80.00th=[ 233], 90.00th=[ 293], 95.00th=[ 302], 00:09:50.868 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 392], 99.95th=[ 392], 00:09:50.868 | 99.99th=[ 392] 00:09:50.868 bw ( KiB/s): min= 4096, max= 4096, per=23.43%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.868 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.869 lat (usec) : 250=80.90%, 500=14.98% 00:09:50.869 lat (msec) : 50=4.12% 00:09:50.869 cpu : usr=0.49%, sys=0.98%, ctx=535, majf=0, minf=1 00:09:50.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:09:50.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.869 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.869 job2: (groupid=0, jobs=1): err= 0: pid=1001891: Tue Nov 19 09:11:51 2024 00:09:50.869 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:09:50.869 slat (nsec): min=9555, max=29235, avg=23006.82, stdev=3315.61 00:09:50.869 clat (usec): min=40609, max=41966, avg=40998.82, stdev=237.40 00:09:50.869 lat (usec): min=40618, max=41990, avg=41021.83, stdev=238.67 00:09:50.869 clat percentiles (usec): 00:09:50.869 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:50.869 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:50.869 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:50.869 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:50.869 | 99.99th=[42206] 00:09:50.869 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:50.869 slat (nsec): min=10320, max=41232, avg=12164.47, stdev=2357.33 00:09:50.869 clat (usec): min=145, max=304, avg=181.07, stdev=19.37 00:09:50.869 lat (usec): min=157, max=345, avg=193.24, stdev=20.16 00:09:50.869 clat percentiles (usec): 00:09:50.869 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:09:50.869 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:09:50.869 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 215], 00:09:50.869 | 99.00th=[ 245], 99.50th=[ 258], 99.90th=[ 306], 99.95th=[ 306], 00:09:50.869 | 99.99th=[ 306] 00:09:50.869 bw ( KiB/s): min= 4096, max= 4096, per=23.43%, avg=4096.00, stdev= 0.00, samples=1 00:09:50.869 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:50.869 lat (usec) : 250=95.13%, 500=0.75% 00:09:50.869 lat (msec) : 50=4.12% 00:09:50.869 cpu : usr=0.50%, sys=0.80%, ctx=534, majf=0, minf=2 00:09:50.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.869 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.869 job3: (groupid=0, jobs=1): err= 0: pid=1001892: Tue Nov 19 09:11:51 2024 00:09:50.869 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:50.869 slat (nsec): min=7258, max=41097, avg=8384.80, stdev=1341.32 00:09:50.869 clat (usec): min=157, max=360, avg=190.67, stdev=14.16 00:09:50.869 lat (usec): min=168, max=368, avg=199.05, stdev=14.23 00:09:50.869 clat percentiles (usec): 00:09:50.869 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:09:50.869 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 192], 00:09:50.869 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 212], 00:09:50.869 | 99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 277], 99.95th=[ 285], 00:09:50.869 | 99.99th=[ 363] 00:09:50.869 write: IOPS=2971, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:09:50.869 slat (nsec): min=10907, max=70017, avg=12450.75, stdev=2184.23 00:09:50.869 clat (usec): min=120, max=354, avg=147.06, stdev=20.44 00:09:50.869 lat (usec): min=131, max=385, avg=159.51, 
stdev=21.37 00:09:50.869 clat percentiles (usec): 00:09:50.869 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 133], 00:09:50.869 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 145], 00:09:50.869 | 70.00th=[ 151], 80.00th=[ 163], 90.00th=[ 180], 95.00th=[ 190], 00:09:50.869 | 99.00th=[ 210], 99.50th=[ 221], 99.90th=[ 293], 99.95th=[ 314], 00:09:50.869 | 99.99th=[ 355] 00:09:50.869 bw ( KiB/s): min=12288, max=12288, per=70.29%, avg=12288.00, stdev= 0.00, samples=1 00:09:50.869 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:50.869 lat (usec) : 250=99.66%, 500=0.34% 00:09:50.869 cpu : usr=3.30%, sys=10.30%, ctx=5535, majf=0, minf=1 00:09:50.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:50.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.869 issued rwts: total=2560,2974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:50.869 00:09:50.869 Run status group 0 (all jobs): 00:09:50.869 READ: bw=9.95MiB/s (10.4MB/s), 86.4KiB/s-9.99MiB/s (88.5kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1032msec 00:09:50.869 WRITE: bw=17.1MiB/s (17.9MB/s), 1984KiB/s-11.6MiB/s (2032kB/s-12.2MB/s), io=17.6MiB (18.5MB), run=1001-1032msec 00:09:50.869 00:09:50.869 Disk stats (read/write): 00:09:50.869 nvme0n1: ios=43/512, merge=0/0, ticks=1600/82, in_queue=1682, util=85.77% 00:09:50.869 nvme0n2: ios=42/512, merge=0/0, ticks=1643/92, in_queue=1735, util=89.83% 00:09:50.869 nvme0n3: ios=75/512, merge=0/0, ticks=815/84, in_queue=899, util=94.69% 00:09:50.869 nvme0n4: ios=2176/2560, merge=0/0, ticks=1301/349, in_queue=1650, util=94.33% 00:09:50.869 09:11:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:50.869 [global] 00:09:50.869 thread=1 00:09:50.869 invalidate=1 00:09:50.869 rw=randwrite 00:09:50.869 time_based=1 00:09:50.869 runtime=1 00:09:50.869 ioengine=libaio 00:09:50.869 direct=1 00:09:50.869 bs=4096 00:09:50.869 iodepth=1 00:09:50.869 norandommap=0 00:09:50.869 numjobs=1 00:09:50.869 00:09:50.869 verify_dump=1 00:09:50.869 verify_backlog=512 00:09:50.869 verify_state_save=0 00:09:50.869 do_verify=1 00:09:50.869 verify=crc32c-intel 00:09:50.869 [job0] 00:09:50.869 filename=/dev/nvme0n1 00:09:50.869 [job1] 00:09:50.869 filename=/dev/nvme0n2 00:09:50.869 [job2] 00:09:50.869 filename=/dev/nvme0n3 00:09:50.869 [job3] 00:09:50.869 filename=/dev/nvme0n4 00:09:50.869 Could not set queue depth (nvme0n1) 00:09:50.869 Could not set queue depth (nvme0n2) 00:09:50.869 Could not set queue depth (nvme0n3) 00:09:50.869 Could not set queue depth (nvme0n4) 00:09:51.129 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.129 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.129 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.129 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.129 fio-3.35 00:09:51.129 Starting 4 threads 00:09:52.501 00:09:52.502 job0: (groupid=0, jobs=1): err= 0: pid=1002268: Tue Nov 19 09:11:53 2024 00:09:52.502 read: IOPS=2421, BW=9686KiB/s 
(9918kB/s)(9928KiB/1025msec) 00:09:52.502 slat (nsec): min=6289, max=27761, avg=7364.95, stdev=1053.18 00:09:52.502 clat (usec): min=151, max=41079, avg=229.07, stdev=1415.04 00:09:52.502 lat (usec): min=158, max=41101, avg=236.44, stdev=1415.44 00:09:52.502 clat percentiles (usec): 00:09:52.502 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:09:52.502 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:09:52.502 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 210], 00:09:52.502 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[40633], 99.95th=[41157], 00:09:52.502 | 99.99th=[41157] 00:09:52.502 write: IOPS=2497, BW=9990KiB/s (10.2MB/s)(10.0MiB/1025msec); 0 zone resets 00:09:52.502 slat (nsec): min=9289, max=47465, avg=10338.69, stdev=1274.29 00:09:52.502 clat (usec): min=109, max=345, avg=156.44, stdev=37.35 00:09:52.502 lat (usec): min=119, max=355, avg=166.78, stdev=37.43 00:09:52.502 clat percentiles (usec): 00:09:52.502 | 1.00th=[ 114], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 126], 00:09:52.502 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 161], 00:09:52.502 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 208], 95.00th=[ 235], 00:09:52.502 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 262], 99.95th=[ 318], 00:09:52.502 | 99.99th=[ 347] 00:09:52.502 bw ( KiB/s): min= 8192, max=12288, per=64.06%, avg=10240.00, stdev=2896.31, samples=2 00:09:52.502 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:09:52.502 lat (usec) : 250=98.61%, 500=1.33% 00:09:52.502 lat (msec) : 50=0.06% 00:09:52.502 cpu : usr=1.76%, sys=4.98%, ctx=5044, majf=0, minf=1 00:09:52.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.502 issued rwts: total=2482,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.502 job1: (groupid=0, jobs=1): err= 0: pid=1002269: Tue Nov 19 09:11:53 2024 00:09:52.502 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:09:52.502 slat (nsec): min=9956, max=23841, avg=21804.82, stdev=2771.50 00:09:52.502 clat (usec): min=40830, max=41215, avg=40979.36, stdev=80.24 00:09:52.502 lat (usec): min=40854, max=41225, avg=41001.16, stdev=78.52 00:09:52.502 clat percentiles (usec): 00:09:52.502 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:52.502 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:52.502 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:52.502 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:52.502 | 99.99th=[41157] 00:09:52.502 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:52.502 slat (nsec): min=9707, max=37770, avg=11075.63, stdev=2175.11 00:09:52.502 clat (usec): min=159, max=343, avg=195.25, stdev=15.70 00:09:52.502 lat (usec): min=169, max=353, avg=206.33, stdev=16.02 00:09:52.502 clat percentiles (usec): 00:09:52.502 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:09:52.502 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:52.502 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 219], 00:09:52.502 | 99.00th=[ 239], 99.50th=[ 258], 99.90th=[ 343], 99.95th=[ 343], 00:09:52.502 | 99.99th=[ 343] 00:09:52.502 bw ( KiB/s): min= 4096, max= 4096, 
per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:09:52.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:52.502 lat (usec) : 250=94.94%, 500=0.94% 00:09:52.502 lat (msec) : 50=4.12% 00:09:52.502 cpu : usr=0.59%, sys=0.69%, ctx=534, majf=0, minf=1 00:09:52.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.502 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.502 job2: (groupid=0, jobs=1): err= 0: pid=1002271: Tue Nov 19 09:11:53 2024 00:09:52.502 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:09:52.502 slat (nsec): min=9450, max=26763, avg=22695.00, stdev=3066.17 00:09:52.502 clat (usec): min=40856, max=42011, avg=41252.78, stdev=453.66 00:09:52.502 lat (usec): min=40880, max=42034, avg=41275.47, stdev=453.55 00:09:52.502 clat percentiles (usec): 00:09:52.502 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:52.502 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:52.502 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:52.502 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:52.502 | 99.99th=[42206] 00:09:52.502 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:09:52.502 slat (nsec): min=9139, max=40542, avg=10101.14, stdev=1619.18 00:09:52.502 clat (usec): min=129, max=373, avg=199.29, stdev=23.61 00:09:52.502 lat (usec): min=138, max=382, avg=209.39, stdev=23.95 00:09:52.502 clat percentiles (usec): 00:09:52.502 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:09:52.502 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:09:52.502 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 241], 95.00th=[ 243], 00:09:52.502 | 99.00th=[ 258], 99.50th=[ 326], 99.90th=[ 375], 99.95th=[ 375], 00:09:52.502 | 99.99th=[ 375] 00:09:52.502 bw ( KiB/s): min= 4096, max= 4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:09:52.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:52.502 lat (usec) : 250=94.01%, 500=1.87% 00:09:52.502 lat (msec) : 50=4.12% 00:09:52.502 cpu : usr=0.30%, sys=0.39%, ctx=534, majf=0, minf=2 00:09:52.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.502 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.502 job3: (groupid=0, jobs=1): err= 0: pid=1002275: Tue Nov 19 09:11:53 2024 00:09:52.502 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:09:52.502 slat (nsec): min=11472, max=26996, avg=22797.18, stdev=2687.91 00:09:52.502 clat (usec): min=40840, max=41186, avg=40975.37, stdev=90.14 00:09:52.502 lat (usec): min=40863, max=41210, avg=40998.17, stdev=89.30 00:09:52.502 clat percentiles (usec): 00:09:52.502 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:52.502 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:52.502 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:52.502 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:52.502 | 99.99th=[41157] 00:09:52.502 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:52.502 slat (nsec): min=11245, max=39775, avg=12427.36, stdev=1731.53 00:09:52.502 clat (usec): min=150, max=323, avg=193.80, stdev=14.52 00:09:52.502 lat (usec): min=162, max=335, avg=206.23, stdev=14.72 00:09:52.502 clat percentiles (usec): 00:09:52.502 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:09:52.502 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 194], 00:09:52.502 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 221], 00:09:52.502 | 99.00th=[ 231], 99.50th=[ 247], 99.90th=[ 322], 99.95th=[ 322], 00:09:52.502 | 99.99th=[ 322] 00:09:52.502 bw ( KiB/s): min= 4096, max= 4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:09:52.502 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:52.502 lat (usec) : 250=95.51%, 500=0.37% 00:09:52.502 lat (msec) : 50=4.12% 00:09:52.502 cpu : usr=0.59%, sys=0.40%, ctx=535, majf=0, minf=1 00:09:52.502 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.502 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.502 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.502 00:09:52.502 Run status group 0 (all jobs): 00:09:52.502 READ: bw=9943KiB/s (10.2MB/s), 86.5KiB/s-9686KiB/s (88.6kB/s-9918kB/s), io=9.95MiB (10.4MB), run=1010-1025msec 00:09:52.502 WRITE: bw=15.6MiB/s (16.4MB/s), 2014KiB/s-9990KiB/s (2062kB/s-10.2MB/s), io=16.0MiB (16.8MB), run=1010-1025msec 00:09:52.502 00:09:52.502 Disk stats (read/write): 00:09:52.502 nvme0n1: ios=2247/2560, merge=0/0, ticks=1247/380, in_queue=1627, util=85.77% 00:09:52.502 nvme0n2: ios=68/512, merge=0/0, ticks=807/95, in_queue=902, util=91.06% 00:09:52.502 nvme0n3: ios=75/512, merge=0/0, ticks=814/101, in_queue=915, util=94.69% 00:09:52.502 nvme0n4: ios=82/512, merge=0/0, ticks=1057/95, in_queue=1152, util=95.80% 00:09:52.502 09:11:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:52.502 [global] 00:09:52.502 thread=1 00:09:52.502 invalidate=1 00:09:52.502 rw=write 00:09:52.502 time_based=1 00:09:52.502 runtime=1 00:09:52.502 ioengine=libaio 00:09:52.502 direct=1 00:09:52.502 bs=4096 00:09:52.502 iodepth=128 00:09:52.502 norandommap=0 00:09:52.502 numjobs=1 00:09:52.502 00:09:52.502 verify_dump=1 00:09:52.502 verify_backlog=512 00:09:52.502 verify_state_save=0 00:09:52.502 do_verify=1 00:09:52.502 verify=crc32c-intel 00:09:52.502 [job0] 00:09:52.502 filename=/dev/nvme0n1 00:09:52.502 [job1] 00:09:52.502 filename=/dev/nvme0n2 00:09:52.502 [job2] 00:09:52.502 filename=/dev/nvme0n3 00:09:52.502 [job3] 00:09:52.502 filename=/dev/nvme0n4 00:09:52.502 Could not set queue depth (nvme0n1) 00:09:52.502 Could not set queue depth (nvme0n2) 00:09:52.502 Could not set queue depth (nvme0n3) 00:09:52.502 Could not set queue depth (nvme0n4) 00:09:52.503 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.503 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.503 job2: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.503 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.503 fio-3.35 00:09:52.503 Starting 4 threads 00:09:53.877 00:09:53.877 job0: (groupid=0, jobs=1): err= 0: pid=1002648: Tue Nov 19 09:11:54 2024 00:09:53.877 read: IOPS=4808, BW=18.8MiB/s (19.7MB/s)(19.0MiB/1009msec) 00:09:53.877 slat (nsec): min=1368, max=10865k, avg=104040.70, stdev=698193.85 00:09:53.877 clat (usec): min=3344, max=43015, avg=12295.29, stdev=4830.23 00:09:53.877 lat (usec): min=3355, max=43019, avg=12399.33, stdev=4883.62 00:09:53.877 clat percentiles (usec): 00:09:53.877 | 1.00th=[ 4752], 5.00th=[ 7963], 10.00th=[ 9241], 20.00th=[ 9372], 00:09:53.877 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10683], 60.00th=[11731], 00:09:53.877 | 70.00th=[12780], 80.00th=[14222], 90.00th=[16909], 95.00th=[21890], 00:09:53.877 | 99.00th=[32900], 99.50th=[37487], 99.90th=[43254], 99.95th=[43254], 00:09:53.877 | 99.99th=[43254] 00:09:53.877 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:09:53.878 slat (usec): min=2, max=10600, avg=91.50, stdev=443.18 00:09:53.878 clat (usec): min=1352, max=43017, avg=13334.39, stdev=6293.78 00:09:53.878 lat (usec): min=1365, max=43027, avg=13425.89, stdev=6339.40 00:09:53.878 clat percentiles (usec): 00:09:53.878 | 1.00th=[ 3523], 5.00th=[ 6128], 10.00th=[ 7177], 20.00th=[ 8979], 00:09:53.878 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10552], 60.00th=[12387], 00:09:53.878 | 70.00th=[16188], 80.00th=[18482], 90.00th=[23200], 95.00th=[25822], 00:09:53.878 | 99.00th=[33162], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:09:53.878 | 99.99th=[43254] 00:09:53.878 bw ( KiB/s): min=16400, max=24560, per=30.02%, avg=20480.00, stdev=5769.99, samples=2 00:09:53.878 iops : min= 4100, max= 6140, avg=5120.00, stdev=1442.50, samples=2 00:09:53.878 lat (msec) : 2=0.02%, 4=0.99%, 10=34.71%, 20=53.02%, 50=11.26% 00:09:53.878 cpu : usr=3.08%, sys=6.35%, ctx=599, majf=0, minf=1 00:09:53.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:53.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.878 issued rwts: total=4852,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.878 job1: (groupid=0, jobs=1): err= 0: pid=1002649: Tue Nov 19 09:11:54 2024 00:09:53.878 read: IOPS=5062, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1003msec) 00:09:53.878 slat (nsec): min=1090, max=38641k, avg=94800.64, stdev=834888.63 00:09:53.878 clat (usec): min=1009, max=75052, avg=12943.26, stdev=8833.16 00:09:53.878 lat (usec): min=3375, max=75059, avg=13038.07, stdev=8851.40 00:09:53.878 clat percentiles (usec): 00:09:53.878 | 1.00th=[ 4817], 5.00th=[ 5604], 10.00th=[ 7570], 20.00th=[ 9765], 00:09:53.878 | 30.00th=[10159], 40.00th=[10421], 50.00th=[11076], 60.00th=[11338], 00:09:53.878 | 70.00th=[11863], 80.00th=[13698], 90.00th=[19006], 95.00th=[31065], 00:09:53.878 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:09:53.878 | 99.99th=[74974] 00:09:53.878 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:53.878 slat (nsec): min=1954, max=21521k, avg=94800.86, stdev=778134.66 00:09:53.878 clat (usec): min=1465, max=44653, avg=11966.38, stdev=5694.22 00:09:53.878 lat (usec): min=1474, 
max=44682, avg=12061.18, stdev=5762.64 00:09:53.878 clat percentiles (usec): 00:09:53.878 | 1.00th=[ 4015], 5.00th=[ 5211], 10.00th=[ 7111], 20.00th=[ 9634], 00:09:53.878 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10945], 00:09:53.878 | 70.00th=[11994], 80.00th=[12649], 90.00th=[19006], 95.00th=[27132], 00:09:53.878 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34866], 00:09:53.878 | 99.99th=[44827] 00:09:53.878 bw ( KiB/s): min=19480, max=21480, per=30.02%, avg=20480.00, stdev=1414.21, samples=2 00:09:53.878 iops : min= 4870, max= 5370, avg=5120.00, stdev=353.55, samples=2 00:09:53.878 lat (msec) : 2=0.07%, 4=0.74%, 10=30.88%, 20=59.99%, 50=7.71% 00:09:53.878 lat (msec) : 100=0.62% 00:09:53.878 cpu : usr=3.49%, sys=5.39%, ctx=287, majf=0, minf=1 00:09:53.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:53.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.878 issued rwts: total=5078,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.878 job2: (groupid=0, jobs=1): err= 0: pid=1002650: Tue Nov 19 09:11:54 2024 00:09:53.878 read: IOPS=2981, BW=11.6MiB/s (12.2MB/s)(11.8MiB/1009msec) 00:09:53.878 slat (nsec): min=1176, max=24236k, avg=174505.42, stdev=1249678.43 00:09:53.878 clat (usec): min=1540, max=86864, avg=20618.29, stdev=14324.93 00:09:53.878 lat (usec): min=5629, max=86889, avg=20792.79, stdev=14436.02 00:09:53.878 clat percentiles (usec): 00:09:53.878 | 1.00th=[ 5669], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[12387], 00:09:53.878 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14484], 60.00th=[15795], 00:09:53.878 | 70.00th=[17957], 80.00th=[27657], 90.00th=[45351], 95.00th=[55313], 00:09:53.878 | 99.00th=[64750], 99.50th=[64750], 99.90th=[78119], 99.95th=[80217], 00:09:53.878 | 99.99th=[86508] 00:09:53.878 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:09:53.878 slat (nsec): min=1968, max=13774k, avg=151361.79, stdev=810167.71 00:09:53.878 clat (usec): min=4659, max=92482, avg=21413.44, stdev=14849.64 00:09:53.878 lat (usec): min=5200, max=92490, avg=21564.80, stdev=14920.03 00:09:53.878 clat percentiles (usec): 00:09:53.878 | 1.00th=[ 8291], 5.00th=[10028], 10.00th=[10945], 20.00th=[12780], 00:09:53.878 | 30.00th=[13042], 40.00th=[13566], 50.00th=[16188], 60.00th=[17433], 00:09:53.878 | 70.00th=[20055], 80.00th=[25035], 90.00th=[46400], 95.00th=[58459], 00:09:53.878 | 99.00th=[68682], 99.50th=[69731], 99.90th=[87557], 99.95th=[92799], 00:09:53.878 | 99.99th=[92799] 00:09:53.878 bw ( KiB/s): min= 8536, max=16040, per=18.01%, avg=12288.00, stdev=5306.13, samples=2 00:09:53.878 iops : min= 2134, max= 4010, avg=3072.00, stdev=1326.53, samples=2 00:09:53.878 lat (msec) : 2=0.02%, 10=6.32%, 20=65.20%, 50=20.39%, 100=8.08% 00:09:53.878 cpu : usr=1.88%, sys=2.98%, ctx=367, majf=0, minf=1 00:09:53.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:53.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.878 issued rwts: total=3008,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.878 job3: (groupid=0, jobs=1): err= 0: pid=1002651: Tue Nov 19 09:11:54 2024 00:09:53.878 read: IOPS=3548, 
BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:09:53.878 slat (nsec): min=1121, max=13391k, avg=109884.58, stdev=816346.36 00:09:53.878 clat (usec): min=3005, max=41090, avg=14228.65, stdev=5761.99 00:09:53.878 lat (usec): min=3012, max=41099, avg=14338.54, stdev=5824.56 00:09:53.878 clat percentiles (usec): 00:09:53.878 | 1.00th=[ 4228], 5.00th=[ 6521], 10.00th=[ 8094], 20.00th=[11207], 00:09:53.878 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12911], 60.00th=[13435], 00:09:53.878 | 70.00th=[14222], 80.00th=[17433], 90.00th=[21365], 95.00th=[23200], 00:09:53.878 | 99.00th=[38536], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:09:53.878 | 99.99th=[41157] 00:09:53.878 write: IOPS=3873, BW=15.1MiB/s (15.9MB/s)(15.3MiB/1010msec); 0 zone resets 00:09:53.878 slat (nsec): min=1990, max=17257k, avg=132871.32, stdev=772705.88 00:09:53.878 clat (usec): min=350, max=72907, avg=19672.62, stdev=14577.91 00:09:53.878 lat (usec): min=386, max=72917, avg=19805.49, stdev=14678.25 00:09:53.878 clat percentiles (usec): 00:09:53.878 | 1.00th=[ 2147], 5.00th=[ 3752], 10.00th=[ 6325], 20.00th=[10028], 00:09:53.878 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12649], 60.00th=[14615], 00:09:53.878 | 70.00th=[21103], 80.00th=[33424], 90.00th=[41681], 95.00th=[50594], 00:09:53.878 | 99.00th=[61080], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 00:09:53.878 | 99.99th=[72877] 00:09:53.878 bw ( KiB/s): min=12720, max=17560, per=22.19%, avg=15140.00, stdev=3422.40, samples=2 00:09:53.878 iops : min= 3180, max= 4390, avg=3785.00, stdev=855.60, samples=2 00:09:53.878 lat (usec) : 500=0.05%, 750=0.05% 00:09:53.878 lat (msec) : 2=0.25%, 4=2.75%, 10=14.62%, 20=59.34%, 50=20.05% 00:09:53.878 lat (msec) : 100=2.88% 00:09:53.878 cpu : usr=3.17%, sys=4.56%, ctx=411, majf=0, minf=1 00:09:53.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:53.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.878 issued rwts: total=3584,3912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.878 00:09:53.878 Run status group 0 (all jobs): 00:09:53.878 READ: bw=63.9MiB/s (67.0MB/s), 11.6MiB/s-19.8MiB/s (12.2MB/s-20.7MB/s), io=64.5MiB (67.7MB), run=1003-1010msec 00:09:53.878 WRITE: bw=66.6MiB/s (69.8MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=67.3MiB (70.5MB), run=1003-1010msec 00:09:53.878 00:09:53.878 Disk stats (read/write): 00:09:53.878 nvme0n1: ios=4145/4391, merge=0/0, ticks=46353/51863, in_queue=98216, util=81.85% 00:09:53.878 nvme0n2: ios=3813/4096, merge=0/0, ticks=27930/24626, in_queue=52556, util=95.58% 00:09:53.878 nvme0n3: ios=2532/2560, merge=0/0, ticks=22761/23223, in_queue=45984, util=97.39% 00:09:53.878 nvme0n4: ios=2598/2735, merge=0/0, ticks=39478/58647, in_queue=98125, util=98.56% 00:09:53.878 09:11:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:53.878 [global] 00:09:53.878 thread=1 00:09:53.878 invalidate=1 00:09:53.878 rw=randwrite 00:09:53.878 time_based=1 00:09:53.878 runtime=1 00:09:53.878 ioengine=libaio 00:09:53.878 direct=1 00:09:53.878 bs=4096 00:09:53.878 iodepth=128 00:09:53.878 norandommap=0 00:09:53.878 numjobs=1 00:09:53.878 00:09:53.878 verify_dump=1 00:09:53.878 verify_backlog=512 00:09:53.878 verify_state_save=0 00:09:53.878 do_verify=1 
00:09:53.878 verify=crc32c-intel 00:09:53.878 [job0] 00:09:53.878 filename=/dev/nvme0n1 00:09:53.878 [job1] 00:09:53.878 filename=/dev/nvme0n2 00:09:53.878 [job2] 00:09:53.878 filename=/dev/nvme0n3 00:09:53.878 [job3] 00:09:53.878 filename=/dev/nvme0n4 00:09:53.878 Could not set queue depth (nvme0n1) 00:09:53.878 Could not set queue depth (nvme0n2) 00:09:53.878 Could not set queue depth (nvme0n3) 00:09:53.878 Could not set queue depth (nvme0n4) 00:09:54.136 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.136 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.136 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.136 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:54.136 fio-3.35 00:09:54.136 Starting 4 threads 00:09:55.512 00:09:55.512 job0: (groupid=0, jobs=1): err= 0: pid=1003021: Tue Nov 19 09:11:56 2024 00:09:55.512 read: IOPS=2011, BW=8047KiB/s (8240kB/s)(8192KiB/1018msec) 00:09:55.512 slat (nsec): min=1372, max=23659k, avg=170253.17, stdev=1297920.36 00:09:55.512 clat (usec): min=4550, max=50547, avg=19744.22, stdev=10198.27 00:09:55.512 lat (usec): min=4556, max=50554, avg=19914.48, stdev=10288.95 00:09:55.512 clat percentiles (usec): 00:09:55.512 | 1.00th=[ 5276], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:09:55.512 | 30.00th=[10814], 40.00th=[16188], 50.00th=[17957], 60.00th=[19268], 00:09:55.512 | 70.00th=[22414], 80.00th=[23987], 90.00th=[36963], 95.00th=[42730], 00:09:55.512 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:09:55.512 | 99.99th=[50594] 00:09:55.512 write: IOPS=2410, BW=9642KiB/s (9874kB/s)(9816KiB/1018msec); 0 zone resets 00:09:55.512 slat (usec): min=2, max=16308, avg=259.57, stdev=1291.26 00:09:55.512 clat (usec): min=1470, max=128782, avg=36157.38, stdev=28037.36 00:09:55.512 lat (usec): min=1481, max=128794, avg=36416.95, stdev=28204.14 00:09:55.512 clat percentiles (msec): 00:09:55.512 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 18], 00:09:55.512 | 30.00th=[ 18], 40.00th=[ 22], 50.00th=[ 27], 60.00th=[ 32], 00:09:55.512 | 70.00th=[ 41], 80.00th=[ 53], 90.00th=[ 81], 95.00th=[ 99], 00:09:55.512 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 129], 99.95th=[ 129], 00:09:55.512 | 99.99th=[ 129] 00:09:55.512 bw ( KiB/s): min= 6344, max=12272, per=15.61%, avg=9308.00, stdev=4191.73, samples=2 00:09:55.512 iops : min= 1586, max= 3068, avg=2327.00, stdev=1047.93, samples=2 00:09:55.512 lat (msec) : 2=0.07%, 4=0.58%, 10=9.04%, 20=38.83%, 50=39.89% 00:09:55.512 lat (msec) : 100=8.91%, 250=2.69% 00:09:55.512 cpu : usr=2.06%, sys=2.75%, ctx=269, majf=0, minf=1 00:09:55.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:55.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.512 issued rwts: total=2048,2454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.512 job1: (groupid=0, jobs=1): err= 0: pid=1003022: Tue Nov 19 09:11:56 2024 00:09:55.512 read: IOPS=5015, BW=19.6MiB/s (20.5MB/s)(20.5MiB/1046msec) 00:09:55.512 slat (nsec): min=1370, max=8572.2k, avg=76584.88, stdev=545301.55 00:09:55.512 clat (usec): min=3156, max=56838, avg=10422.88, 
stdev=7042.47 00:09:55.512 lat (usec): min=3161, max=56848, avg=10499.47, stdev=7056.90 00:09:55.512 clat percentiles (usec): 00:09:55.512 | 1.00th=[ 3785], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7832], 00:09:55.512 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:09:55.512 | 70.00th=[10159], 80.00th=[11469], 90.00th=[13566], 95.00th=[14877], 00:09:55.512 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56361], 99.95th=[56886], 00:09:55.512 | 99.99th=[56886] 00:09:55.512 write: IOPS=5384, BW=21.0MiB/s (22.1MB/s)(22.0MiB/1046msec); 0 zone resets 00:09:55.512 slat (usec): min=2, max=35746, avg=101.21, stdev=771.29 00:09:55.512 clat (usec): min=1530, max=108352, avg=13814.38, stdev=16718.49 00:09:55.512 lat (usec): min=1573, max=108371, avg=13915.59, stdev=16818.83 00:09:55.512 clat percentiles (msec): 00:09:55.512 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:09:55.512 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:09:55.512 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 38], 95.00th=[ 51], 00:09:55.512 | 99.00th=[ 95], 99.50th=[ 100], 99.90th=[ 109], 99.95th=[ 109], 00:09:55.512 | 99.99th=[ 109] 00:09:55.512 bw ( KiB/s): min=20464, max=24576, per=37.77%, avg=22520.00, stdev=2907.62, samples=2 00:09:55.512 iops : min= 5116, max= 6144, avg=5630.00, stdev=726.91, samples=2 00:09:55.512 lat (msec) : 2=0.01%, 4=2.47%, 10=72.61%, 20=17.14%, 50=4.07% 00:09:55.512 lat (msec) : 100=3.49%, 250=0.20% 00:09:55.512 cpu : usr=3.25%, sys=6.41%, ctx=675, majf=0, minf=1 00:09:55.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:55.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.512 issued rwts: total=5246,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.512 job2: (groupid=0, jobs=1): err= 0: pid=1003023: Tue Nov 19 09:11:56 2024 00:09:55.512 read: IOPS=4023, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1018msec) 00:09:55.512 slat (nsec): min=1222, max=23510k, avg=110628.53, stdev=957319.62 00:09:55.512 clat (usec): min=2264, max=48550, avg=15127.76, stdev=7789.81 00:09:55.512 lat (usec): min=2271, max=48562, avg=15238.39, stdev=7877.34 00:09:55.512 clat percentiles (usec): 00:09:55.512 | 1.00th=[ 4752], 5.00th=[ 6194], 10.00th=[ 8586], 20.00th=[ 9896], 00:09:55.512 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11731], 60.00th=[13042], 00:09:55.512 | 70.00th=[18220], 80.00th=[20841], 90.00th=[27657], 95.00th=[31065], 00:09:55.512 | 99.00th=[36963], 99.50th=[38536], 99.90th=[41157], 99.95th=[42730], 00:09:55.512 | 99.99th=[48497] 00:09:55.512 write: IOPS=4240, BW=16.6MiB/s (17.4MB/s)(16.9MiB/1018msec); 0 zone resets 00:09:55.512 slat (usec): min=2, max=33703, avg=101.73, stdev=937.87 00:09:55.512 clat (usec): min=2298, max=67680, avg=15509.77, stdev=13043.61 00:09:55.512 lat (usec): min=2307, max=67685, avg=15611.50, stdev=13121.31 00:09:55.512 clat percentiles (usec): 00:09:55.512 | 1.00th=[ 3589], 5.00th=[ 4883], 10.00th=[ 6128], 20.00th=[ 7963], 00:09:55.512 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[11207], 00:09:55.512 | 70.00th=[12911], 80.00th=[21103], 90.00th=[38011], 95.00th=[47449], 00:09:55.512 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:09:55.512 | 99.99th=[67634] 00:09:55.512 bw ( KiB/s): min=15728, max=17792, per=28.11%, avg=16760.00, stdev=1459.47, samples=2 00:09:55.512 iops : min= 3932, 
max= 4448, avg=4190.00, stdev=364.87, samples=2 00:09:55.512 lat (msec) : 4=1.47%, 10=36.28%, 20=41.28%, 50=18.79%, 100=2.18% 00:09:55.512 cpu : usr=2.46%, sys=4.52%, ctx=269, majf=0, minf=1 00:09:55.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:55.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.512 issued rwts: total=4096,4317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.512 job3: (groupid=0, jobs=1): err= 0: pid=1003024: Tue Nov 19 09:11:56 2024 00:09:55.512 read: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1018msec) 00:09:55.512 slat (nsec): min=1497, max=21730k, avg=158018.42, stdev=1124799.49 00:09:55.512 clat (usec): min=5242, max=81845, avg=16879.17, stdev=10727.23 00:09:55.512 lat (usec): min=5249, max=81855, avg=17037.19, stdev=10869.81 00:09:55.512 clat percentiles (usec): 00:09:55.512 | 1.00th=[ 7635], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10814], 00:09:55.512 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[13566], 00:09:55.512 | 70.00th=[15270], 80.00th=[23725], 90.00th=[26870], 95.00th=[36963], 00:09:55.512 | 99.00th=[62129], 99.50th=[64226], 99.90th=[82314], 99.95th=[82314], 00:09:55.512 | 99.99th=[82314] 00:09:55.512 write: IOPS=3133, BW=12.2MiB/s (12.8MB/s)(12.5MiB/1018msec); 0 zone resets 00:09:55.512 slat (usec): min=2, max=21259, avg=154.20, stdev=1087.63 00:09:55.512 clat (msec): min=2, max=119, avg=24.14, stdev=20.28 00:09:55.512 lat (msec): min=2, max=119, avg=24.29, stdev=20.38 00:09:55.512 clat percentiles (msec): 00:09:55.512 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:09:55.512 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 21], 00:09:55.512 | 70.00th=[ 23], 80.00th=[ 28], 90.00th=[ 49], 95.00th=[ 67], 00:09:55.512 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 120], 99.95th=[ 120], 00:09:55.512 | 99.99th=[ 120] 00:09:55.512 bw ( KiB/s): min= 9360, max=15216, per=20.61%, avg=12288.00, stdev=4140.82, samples=2 00:09:55.512 iops : min= 2340, max= 3804, avg=3072.00, stdev=1035.20, samples=2 00:09:55.512 lat (msec) : 4=0.32%, 10=7.25%, 20=58.99%, 50=27.75%, 100=4.55% 00:09:55.512 lat (msec) : 250=1.13% 00:09:55.512 cpu : usr=2.36%, sys=4.52%, ctx=268, majf=0, minf=1 00:09:55.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:55.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.512 issued rwts: total=3072,3190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.512 00:09:55.512 Run status group 0 (all jobs): 00:09:55.512 READ: bw=54.0MiB/s (56.6MB/s), 8047KiB/s-19.6MiB/s (8240kB/s-20.5MB/s), io=56.5MiB (59.2MB), run=1018-1046msec 00:09:55.512 WRITE: bw=58.2MiB/s (61.1MB/s), 9642KiB/s-21.0MiB/s (9874kB/s-22.1MB/s), io=60.9MiB (63.9MB), run=1018-1046msec 00:09:55.512 00:09:55.512 Disk stats (read/write): 00:09:55.512 nvme0n1: ios=1585/2048, merge=0/0, ticks=30219/70905, in_queue=101124, util=85.77% 00:09:55.512 nvme0n2: ios=4578/4608, merge=0/0, ticks=41959/58489, in_queue=100448, util=97.74% 00:09:55.512 nvme0n3: ios=3253/3584, merge=0/0, ticks=44957/40459, in_queue=85416, util=90.11% 00:09:55.512 nvme0n4: ios=2577/2775, merge=0/0, ticks=41380/59716, in_queue=101096, util=96.82% 
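The four job banners and per-job statistics above all come from one fio job file; only its tail (verify=crc32c-intel and the [jobN] filename stanzas) is visible at the top of this excerpt. A sketch of a job file consistent with the banners (rw=randwrite, bs=4096, ioengine=libaio, iodepth=128 are read off the banners; the remaining [global] keys are assumptions):

    cat > verify.fio <<'EOF'
    [global]
    thread=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    rw=randwrite
    verify=crc32c-intel
    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio verify.fio

The "Could not set queue depth" lines above are fio warnings, not failures; all four jobs still start and complete with err=0.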
00:09:55.512 09:11:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:55.512 09:11:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1003265 00:09:55.512 09:11:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:55.512 09:11:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:55.512 [global] 00:09:55.512 thread=1 00:09:55.512 invalidate=1 00:09:55.512 rw=read 00:09:55.512 time_based=1 00:09:55.512 runtime=10 00:09:55.512 ioengine=libaio 00:09:55.512 direct=1 00:09:55.512 bs=4096 00:09:55.512 iodepth=1 00:09:55.512 norandommap=1 00:09:55.512 numjobs=1 00:09:55.512 00:09:55.512 [job0] 00:09:55.512 filename=/dev/nvme0n1 00:09:55.512 [job1] 00:09:55.512 filename=/dev/nvme0n2 00:09:55.512 [job2] 00:09:55.512 filename=/dev/nvme0n3 00:09:55.512 [job3] 00:09:55.512 filename=/dev/nvme0n4 00:09:55.512 Could not set queue depth (nvme0n1) 00:09:55.512 Could not set queue depth (nvme0n2) 00:09:55.512 Could not set queue depth (nvme0n3) 00:09:55.512 Could not set queue depth (nvme0n4) 00:09:55.769 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.769 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.769 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.769 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:55.769 fio-3.35 00:09:55.769 Starting 4 threads 00:09:59.046 09:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:59.046 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=22863872, buflen=4096 00:09:59.046 fio: pid=1003414, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:59.046 09:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:59.046 09:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.046 09:11:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:59.046 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:09:59.046 fio: pid=1003413, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:59.046 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.046 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:59.046 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=43700224, buflen=4096 00:09:59.046 fio: pid=1003411, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:59.304 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=32157696, buflen=4096 00:09:59.304 fio: pid=1003412, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:59.304 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.304 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:59.304 00:09:59.304 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1003411: Tue Nov 19 09:12:00 2024 00:09:59.304 read: IOPS=3375, BW=13.2MiB/s (13.8MB/s)(41.7MiB/3161msec) 00:09:59.304 slat (usec): min=5, max=8824, avg= 8.10, stdev=85.37 00:09:59.304 clat (usec): min=141, max=45001, avg=284.50, stdev=1585.68 00:09:59.304 lat (usec): min=148, max=49968, avg=292.60, stdev=1609.78 00:09:59.304 clat percentiles (usec): 00:09:59.304 | 1.00th=[ 169], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:09:59.304 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 235], 00:09:59.304 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:09:59.304 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[41157], 99.95th=[41157], 00:09:59.304 | 99.99th=[41157] 00:09:59.304 bw ( KiB/s): min= 180, max=18960, per=49.21%, avg=14218.00, stdev=7053.38, samples=6 00:09:59.304 iops : min= 45, max= 4740, avg=3554.50, stdev=1763.34, samples=6 00:09:59.304 lat (usec) : 250=78.79%, 500=21.04% 00:09:59.304 lat (msec) : 4=0.01%, 50=0.15% 00:09:59.304 cpu : usr=0.82%, sys=3.07%, ctx=10674, majf=0, minf=1 00:09:59.304 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.304 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.304 issued rwts: total=10670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.304 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.304 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1003412: Tue Nov 19 09:12:00 2024 00:09:59.304 read: IOPS=2345, BW=9383KiB/s (9608kB/s)(30.7MiB/3347msec) 00:09:59.304 slat (usec): min=7, max=15619, avg=15.58, stdev=312.82 00:09:59.305 clat (usec): min=161, max=41245, avg=405.97, stdev=2552.61 00:09:59.305 lat (usec): min=170, max=41253, avg=421.56, stdev=2571.80 00:09:59.305 clat percentiles (usec): 00:09:59.305 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 206], 20.00th=[ 227], 00:09:59.305 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:09:59.305 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 297], 00:09:59.305 | 99.00th=[ 363], 99.50th=[ 461], 99.90th=[41157], 99.95th=[41157], 00:09:59.305 | 99.99th=[41157] 00:09:59.305 bw ( KiB/s): min= 96, max=15392, per=30.15%, avg=8710.00, stdev=7353.79, samples=6 00:09:59.305 iops : min= 24, max= 3848, avg=2177.50, stdev=1838.45, samples=6 00:09:59.305 lat (usec) : 250=59.40%, 500=40.13%, 750=0.05% 00:09:59.305 lat (msec) : 2=0.01%, 50=0.39% 00:09:59.305 cpu : usr=1.34%, sys=3.86%, ctx=7858, majf=0, minf=1 00:09:59.305 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.305 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.305 issued rwts: total=7852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.305 latency : target=0, window=0, percentile=100.00%, depth=1 
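The io_u errors above ("Operation not supported") are deliberate: fio.sh starts a 10-second read workload against the exported namespaces, then deletes the backing bdevs over RPC while reads are still in flight. Condensed from the trace, with paths relative to the spdk checkout:

    # start reads in the background, then hot-remove the bdevs under them
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for bdev in Malloc0 Malloc1; do
        scripts/rpc.py bdev_malloc_delete "$bdev"
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'

Each deletion is answered by one "io_u error" line as the corresponding /dev/nvme0nX namespace drops out from under its job.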
00:09:59.305 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1003413: Tue Nov 19 09:12:00 2024 00:09:59.305 read: IOPS=25, BW=99.1KiB/s (101kB/s)(292KiB/2946msec) 00:09:59.305 slat (nsec): min=8986, max=42271, avg=22834.09, stdev=3203.48 00:09:59.305 clat (usec): min=396, max=42036, avg=40041.33, stdev=6700.82 00:09:59.305 lat (usec): min=423, max=42059, avg=40064.14, stdev=6698.83 00:09:59.305 clat percentiles (usec): 00:09:59.305 | 1.00th=[ 396], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:59.305 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:59.305 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:59.305 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:59.305 | 99.99th=[42206] 00:09:59.305 bw ( KiB/s): min= 96, max= 104, per=0.34%, avg=99.20, stdev= 4.38, samples=5 00:09:59.305 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:09:59.305 lat (usec) : 500=1.35%, 750=1.35% 00:09:59.305 lat (msec) : 50=95.95% 00:09:59.305 cpu : usr=0.10%, sys=0.00%, ctx=75, majf=0, minf=2 00:09:59.305 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.305 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.305 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.305 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1003414: Tue Nov 19 09:12:00 2024 00:09:59.305 read: IOPS=2060, BW=8242KiB/s (8440kB/s)(21.8MiB/2709msec) 00:09:59.305 slat (nsec): min=7000, max=42670, avg=9382.92, stdev=2020.70 00:09:59.305 clat (usec): min=170, max=41075, avg=469.76, stdev=3025.89 00:09:59.305 lat (usec): min=199, max=41101, avg=479.14, stdev=3026.93 00:09:59.305 clat percentiles (usec): 00:09:59.305 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 231], 00:09:59.305 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 247], 00:09:59.305 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 265], 00:09:59.305 | 99.00th=[ 355], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:59.305 | 99.99th=[41157] 00:09:59.305 bw ( KiB/s): min= 96, max=15568, per=26.95%, avg=7785.60, stdev=7712.62, samples=5 00:09:59.305 iops : min= 24, max= 3892, avg=1946.40, stdev=1928.16, samples=5 00:09:59.305 lat (usec) : 250=69.80%, 500=29.50%, 750=0.05% 00:09:59.305 lat (msec) : 2=0.05%, 4=0.02%, 50=0.56% 00:09:59.305 cpu : usr=1.51%, sys=3.32%, ctx=5584, majf=0, minf=1 00:09:59.305 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.305 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.305 issued rwts: total=5583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.305 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.305 00:09:59.305 Run status group 0 (all jobs): 00:09:59.305 READ: bw=28.2MiB/s (29.6MB/s), 99.1KiB/s-13.2MiB/s (101kB/s-13.8MB/s), io=94.4MiB (99.0MB), run=2709-3347msec 00:09:59.305 00:09:59.305 Disk stats (read/write): 00:09:59.305 nvme0n1: ios=10702/0, merge=0/0, ticks=3839/0, in_queue=3839, util=99.17% 00:09:59.305 nvme0n2: ios=6935/0, merge=0/0, ticks=3843/0, in_queue=3843, util=97.86% 00:09:59.305 
nvme0n3: ios=71/0, merge=0/0, ticks=2843/0, in_queue=2843, util=96.52% 00:09:59.305 nvme0n4: ios=5254/0, merge=0/0, ticks=2495/0, in_queue=2495, util=96.44% 00:09:59.562 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.562 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:59.820 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:59.821 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:00.078 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.078 09:12:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:00.335 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:00.335 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:00.335 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:00.335 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1003265 00:10:00.335 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:00.336 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:00.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:00.593 nvmf hotplug test: fio failed as expected 00:10:00.593 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 
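With the expected failure recorded, the host disconnects and the test polls until no block device advertises the SPDK serial any more, which is what the repeated lsblk/grep probes above are doing. A simplified rendering of that waitforserial_disconnect loop (the retry limit and sleep are assumptions; the trace only shows the probes and the i=0 counter):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    serial=SPDKISFASTANDAWESOME
    i=0
    # done once lsblk no longer lists any device with the SPDK serial
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && { echo "$serial never disappeared"; exit 1; }
        sleep 1
    done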
00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:00.851 rmmod nvme_tcp 00:10:00.851 rmmod nvme_fabrics 00:10:00.851 rmmod nvme_keyring 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1000322 ']' 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1000322 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1000322 ']' 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1000322 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1000322 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1000322' 00:10:00.851 killing process with pid 1000322 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1000322 00:10:00.851 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1000322 00:10:01.111 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.111 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.111 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.111 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:01.111 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:01.111 09:12:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.111 09:12:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.111 09:12:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.111 09:12:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.111 09:12:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.111 09:12:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.111 09:12:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.017 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.017 00:10:03.017 real 0m27.722s 00:10:03.017 user 1m50.246s 00:10:03.017 sys 0m8.606s 00:10:03.017 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.017 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.017 ************************************ 00:10:03.017 END TEST nvmf_fio_target 00:10:03.017 ************************************ 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.278 ************************************ 00:10:03.278 START TEST nvmf_bdevio 00:10:03.278 ************************************ 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:03.278 * Looking for test storage... 
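nvmftestfini, traced at 09:12:01 above, unloads the host-side NVMe modules and then strips only the firewall rules this test added, by dropping every saved rule that carries the SPDK_NVMF comment tag:

    modprobe -v -r nvme-tcp      # the rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    # keep everything except SPDK_NVMF-tagged rules, then reload the table
    iptables-save | grep -v SPDK_NVMF | iptables-restore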
00:10:03.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:03.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.278 --rc genhtml_branch_coverage=1 00:10:03.278 --rc genhtml_function_coverage=1 00:10:03.278 --rc genhtml_legend=1 00:10:03.278 --rc geninfo_all_blocks=1 00:10:03.278 --rc geninfo_unexecuted_blocks=1 00:10:03.278 00:10:03.278 ' 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:03.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.278 --rc genhtml_branch_coverage=1 00:10:03.278 --rc genhtml_function_coverage=1 00:10:03.278 --rc genhtml_legend=1 00:10:03.278 --rc geninfo_all_blocks=1 00:10:03.278 --rc geninfo_unexecuted_blocks=1 00:10:03.278 00:10:03.278 ' 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:03.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.278 --rc genhtml_branch_coverage=1 00:10:03.278 --rc genhtml_function_coverage=1 00:10:03.278 --rc genhtml_legend=1 00:10:03.278 --rc geninfo_all_blocks=1 00:10:03.278 --rc geninfo_unexecuted_blocks=1 00:10:03.278 00:10:03.278 ' 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:03.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.278 --rc genhtml_branch_coverage=1 00:10:03.278 --rc genhtml_function_coverage=1 00:10:03.278 --rc genhtml_legend=1 00:10:03.278 --rc geninfo_all_blocks=1 00:10:03.278 --rc geninfo_unexecuted_blocks=1 00:10:03.278 00:10:03.278 ' 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
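The long scripts/common.sh walk above is the lcov version gate for the coverage helpers: it splits each version string on '.', '-', and ':' and compares the components numerically, left to right. A condensed standalone sketch of the traced logic (the real helper also validates each component through decimal(); that check is elided here):

    lt() {    # succeeds when version $1 sorts strictly before $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x'    # matches the trace: 1 < 2 at v=0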
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.278 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.279 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.538 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:03.538 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:03.538 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:03.538 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:03.538 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.538 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:03.538 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:03.538 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:03.538 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.539 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.539 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.539 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:03.539 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:03.539 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:03.539 09:12:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:10.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:10.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.110 09:12:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.110 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:10.111 Found net devices under 0000:86:00.0: cvl_0_0 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:10.111 Found net devices under 0000:86:00.1: cvl_0_1 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.111 
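The device discovery above boils down to: scan every PCI function, keep the e810 IDs (vendor 0x8086, device 0x159b), and read each function's bound net device out of sysfs. A standalone sketch of that mapping, with the output format copied from the "Found net devices" lines:

    # enumerate e810 ports and the kernel net devices bound to them
    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == 0x8086 ]] || continue
        [[ $(< "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue    # port present but no netdev bound
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done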
09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:10.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:10:10.111 00:10:10.111 --- 10.0.0.2 ping statistics --- 00:10:10.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.111 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:10:10.111 00:10:10.111 --- 10.0.0.1 ping statistics --- 00:10:10.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.111 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1008397 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1008397 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1008397 ']' 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.111 [2024-11-19 09:12:10.340823] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
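nvmf_tcp_init, traced above, pushes one e810 port into a private network namespace so the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, root namespace) talk across the physical link rather than loopback; the two pings prove both directions before the target starts. The same sequence, collapsed:

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port, tagging the rule so nvmftestfini can strip it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                             # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1         # target -> initiator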
00:10:10.111 [2024-11-19 09:12:10.340872] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.111 [2024-11-19 09:12:10.420960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.111 [2024-11-19 09:12:10.461685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.111 [2024-11-19 09:12:10.461728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.111 [2024-11-19 09:12:10.461736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.111 [2024-11-19 09:12:10.461743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.111 [2024-11-19 09:12:10.461748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.111 [2024-11-19 09:12:10.463312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.111 [2024-11-19 09:12:10.463400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:10.111 [2024-11-19 09:12:10.463511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.111 [2024-11-19 09:12:10.463511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.111 [2024-11-19 09:12:10.607923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.111 Malloc0 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.111 09:12:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.111 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.112 [2024-11-19 09:12:10.667673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:10.112 { 00:10:10.112 "params": { 00:10:10.112 "name": "Nvme$subsystem", 00:10:10.112 "trtype": "$TEST_TRANSPORT", 00:10:10.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.112 "adrfam": "ipv4", 00:10:10.112 "trsvcid": "$NVMF_PORT", 00:10:10.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.112 "hdgst": ${hdgst:-false}, 00:10:10.112 "ddgst": ${ddgst:-false} 00:10:10.112 }, 00:10:10.112 "method": "bdev_nvme_attach_controller" 00:10:10.112 } 00:10:10.112 EOF 00:10:10.112 )") 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:10.112 09:12:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:10.112 "params": { 00:10:10.112 "name": "Nvme1", 00:10:10.112 "trtype": "tcp", 00:10:10.112 "traddr": "10.0.0.2", 00:10:10.112 "adrfam": "ipv4", 00:10:10.112 "trsvcid": "4420", 00:10:10.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.112 "hdgst": false, 00:10:10.112 "ddgst": false 00:10:10.112 }, 00:10:10.112 "method": "bdev_nvme_attach_controller" 00:10:10.112 }' 00:10:10.112 [2024-11-19 09:12:10.719873] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
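With networking in place, the rpc_cmd calls traced above assemble the target that bdevio will exercise: a TCP transport, one 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, 512-byte blocks), and a subsystem exporting it on 10.0.0.2:4420. Flattened into plain rpc.py invocations, paths relative to the spdk checkout:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches from the host side using the bdev_nvme_attach_controller entry that gen_nvmf_target_json expands to (visible verbatim after the printf above), delivered without a temp file as --json /dev/fd/62.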
00:10:10.112 [2024-11-19 09:12:10.719916] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008422 ] 00:10:10.112 [2024-11-19 09:12:10.793158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:10.112 [2024-11-19 09:12:10.837672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.112 [2024-11-19 09:12:10.837783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.112 [2024-11-19 09:12:10.837783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.112 I/O targets: 00:10:10.112 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:10.112 00:10:10.112 00:10:10.112 CUnit - A unit testing framework for C - Version 2.1-3 00:10:10.112 http://cunit.sourceforge.net/ 00:10:10.112 00:10:10.112 00:10:10.112 Suite: bdevio tests on: Nvme1n1 00:10:10.371 Test: blockdev write read block ...passed 00:10:10.371 Test: blockdev write zeroes read block ...passed 00:10:10.371 Test: blockdev write zeroes read no split ...passed 00:10:10.371 Test: blockdev write zeroes read split ...passed 00:10:10.371 Test: blockdev write zeroes read split partial ...passed 00:10:10.371 Test: blockdev reset ...[2024-11-19 09:12:11.269908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:10.371 [2024-11-19 09:12:11.269979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a4340 (9): Bad file descriptor 00:10:10.371 [2024-11-19 09:12:11.285591] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:10.371 passed 00:10:10.371 Test: blockdev write read 8 blocks ...passed 00:10:10.371 Test: blockdev write read size > 128k ...passed 00:10:10.371 Test: blockdev write read invalid size ...passed 00:10:10.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:10.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:10.371 Test: blockdev write read max offset ...passed 00:10:10.630 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:10.630 Test: blockdev writev readv 8 blocks ...passed 00:10:10.630 Test: blockdev writev readv 30 x 1block ...passed 00:10:10.630 Test: blockdev writev readv block ...passed 00:10:10.630 Test: blockdev writev readv size > 128k ...passed 00:10:10.630 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:10.630 Test: blockdev comparev and writev ...[2024-11-19 09:12:11.537735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:10.630 [2024-11-19 09:12:11.537764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.537778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:10.630 [2024-11-19 09:12:11.537786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.538021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:10.630 [2024-11-19 09:12:11.538032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.538043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:10.630 [2024-11-19 09:12:11.538050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.538302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:10.630 [2024-11-19 09:12:11.538312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.538323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:10.630 [2024-11-19 09:12:11.538330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.538567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:10.630 [2024-11-19 09:12:11.538577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.538588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:10.630 [2024-11-19 09:12:11.538595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:10.630 passed 00:10:10.630 Test: blockdev nvme passthru rw ...passed 00:10:10.630 Test: blockdev nvme passthru vendor specific ...[2024-11-19 09:12:11.622322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:10.630 [2024-11-19 09:12:11.622339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.622446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:10.630 [2024-11-19 09:12:11.622455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.622561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:10.630 [2024-11-19 09:12:11.622571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:10.630 [2024-11-19 09:12:11.622668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:10.630 [2024-11-19 09:12:11.622678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:10.630 passed 00:10:10.630 Test: blockdev nvme admin passthru ...passed 00:10:10.630 Test: blockdev copy ...passed 00:10:10.630 00:10:10.630 Run Summary: Type Total Ran Passed Failed Inactive 00:10:10.630 suites 1 1 n/a 0 0 00:10:10.630 tests 23 23 23 0 0 00:10:10.630 asserts 152 152 152 0 n/a 00:10:10.630 00:10:10.630 Elapsed time = 1.058 seconds 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.889 rmmod nvme_tcp 00:10:10.889 rmmod nvme_fabrics 00:10:10.889 rmmod nvme_keyring 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
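The rmmod lines above are nvmfcleanup unloading the kernel initiator stack; modprobe -r nvme-tcp pulls nvme_tcp, nvme_fabrics and nvme_keyring out in one pass here. A sketch of that cleanup as traced, with module names from this run (the retry bound comes from the {1..20} loop visible above; the break-on-success and the sleep between attempts are assumptions, since only the successful first pass is traced):

  # set +e so a still-busy module does not abort the script; retry up to 20x
  set +e
  for i in {1..20}; do
      # -v echoes the underlying rmmod calls, matching the output above
      modprobe -v -r nvme-tcp && break
      sleep 1   # assumption: back off briefly before the next attempt
  done
  modprobe -v -r nvme-fabrics
  set -e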
00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1008397 ']' 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1008397 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1008397 ']' 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1008397 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:10.889 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1008397 00:10:11.147 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:11.147 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:11.147 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1008397' 00:10:11.147 killing process with pid 1008397 00:10:11.147 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1008397 00:10:11.147 09:12:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1008397 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.147 09:12:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.686 00:10:13.686 real 0m10.078s 00:10:13.686 user 0m10.737s 00:10:13.686 sys 0m5.001s 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:13.686 ************************************ 00:10:13.686 END TEST nvmf_bdevio 00:10:13.686 ************************************ 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:13.686 00:10:13.686 real 4m37.852s 00:10:13.686 user 10m28.552s 00:10:13.686 sys 1m37.700s 
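Everything from killprocess through the address flush above is the generic nvmftestfini tail, condensed below with the PID, comment tag and interface names taken from this run (_remove_spdk_ns is sketched as a plain namespace delete, which is an assumption about its body):

  # killprocess: confirm PID 1008397 is alive and is not sudo, then kill and reap it
  kill -0 1008397
  [ "$(ps --no-headers -o comm= 1008397)" != sudo ] && kill 1008397
  wait 1008397
  # iptr: round-trip the ruleset, dropping only rules tagged with an SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # tear down the target-side namespace and flush the initiator-side address
  ip netns delete cvl_0_0_ns_spdk   # assumption: the effect of _remove_spdk_ns here
  ip -4 addr flush cvl_0_1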
00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.686 ************************************ 00:10:13.686 END TEST nvmf_target_core 00:10:13.686 ************************************ 00:10:13.686 09:12:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:13.686 09:12:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:13.686 09:12:14 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:13.686 09:12:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.686 ************************************ 00:10:13.686 START TEST nvmf_target_extra 00:10:13.686 ************************************ 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:13.686 * Looking for test storage... 00:10:13.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:13.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.686 --rc genhtml_branch_coverage=1 00:10:13.686 --rc genhtml_function_coverage=1 00:10:13.686 --rc genhtml_legend=1 00:10:13.686 --rc geninfo_all_blocks=1 00:10:13.686 --rc geninfo_unexecuted_blocks=1 00:10:13.686 00:10:13.686 ' 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:13.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.686 --rc genhtml_branch_coverage=1 00:10:13.686 --rc genhtml_function_coverage=1 00:10:13.686 --rc genhtml_legend=1 00:10:13.686 --rc geninfo_all_blocks=1 00:10:13.686 --rc geninfo_unexecuted_blocks=1 00:10:13.686 00:10:13.686 ' 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:13.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.686 --rc genhtml_branch_coverage=1 00:10:13.686 --rc genhtml_function_coverage=1 00:10:13.686 --rc genhtml_legend=1 00:10:13.686 --rc geninfo_all_blocks=1 00:10:13.686 --rc geninfo_unexecuted_blocks=1 00:10:13.686 00:10:13.686 ' 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:13.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.686 --rc genhtml_branch_coverage=1 00:10:13.686 --rc genhtml_function_coverage=1 00:10:13.686 --rc genhtml_legend=1 00:10:13.686 --rc geninfo_all_blocks=1 00:10:13.686 --rc geninfo_unexecuted_blocks=1 00:10:13.686 00:10:13.686 ' 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.686 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:13.687 ************************************ 00:10:13.687 START TEST nvmf_example 00:10:13.687 ************************************ 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:13.687 * Looking for test storage... 
00:10:13.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:13.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.687 --rc genhtml_branch_coverage=1 00:10:13.687 --rc genhtml_function_coverage=1 00:10:13.687 --rc genhtml_legend=1 00:10:13.687 --rc geninfo_all_blocks=1 00:10:13.687 --rc geninfo_unexecuted_blocks=1 00:10:13.687 00:10:13.687 ' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:13.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.687 --rc genhtml_branch_coverage=1 00:10:13.687 --rc genhtml_function_coverage=1 00:10:13.687 --rc genhtml_legend=1 00:10:13.687 --rc geninfo_all_blocks=1 00:10:13.687 --rc geninfo_unexecuted_blocks=1 00:10:13.687 00:10:13.687 ' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:13.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.687 --rc genhtml_branch_coverage=1 00:10:13.687 --rc genhtml_function_coverage=1 00:10:13.687 --rc genhtml_legend=1 00:10:13.687 --rc geninfo_all_blocks=1 00:10:13.687 --rc geninfo_unexecuted_blocks=1 00:10:13.687 00:10:13.687 ' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:13.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.687 --rc genhtml_branch_coverage=1 00:10:13.687 --rc genhtml_function_coverage=1 00:10:13.687 --rc genhtml_legend=1 00:10:13.687 --rc geninfo_all_blocks=1 00:10:13.687 --rc geninfo_unexecuted_blocks=1 00:10:13.687 00:10:13.687 ' 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:13.687 09:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.687 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.688 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.947 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:13.948 09:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.948 09:12:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:20.519 09:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:20.519 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:20.519 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:20.519 Found net devices under 0000:86:00.0: cvl_0_0 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.519 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:20.520 Found net devices under 0000:86:00.1: cvl_0_1 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.520 09:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:20.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:10:20.520 00:10:20.520 --- 10.0.0.2 ping statistics --- 00:10:20.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.520 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:10:20.520 00:10:20.520 --- 10.0.0.1 ping statistics --- 00:10:20.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.520 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1012241 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1012241 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 1012241 ']' 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:20.520 09:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:20.520 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:20.780 09:12:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:32.993 Initializing NVMe Controllers
00:10:32.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:32.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:32.993 Initialization complete. Launching workers.
00:10:32.993 ========================================================
00:10:32.993 Latency(us)
00:10:32.993 Device Information : IOPS MiB/s Average min max
00:10:32.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18016.87 70.38 3552.15 586.45 15423.61
00:10:32.993 ========================================================
00:10:32.993 Total : 18016.87 70.38 3552.15 586.45 15423.61
00:10:32.993
00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.993 rmmod nvme_tcp 00:10:32.993 rmmod nvme_fabrics 00:10:32.993 rmmod nvme_keyring 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1012241 ']' 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1012241 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 1012241 ']' 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 1012241 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1012241 00:10:32.993 09:12:32
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1012241' 00:10:32.993 killing process with pid 1012241 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 1012241 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 1012241 00:10:32.993 nvmf threads initialize successfully 00:10:32.993 bdev subsystem init successfully 00:10:32.993 created a nvmf target service 00:10:32.993 create targets's poll groups done 00:10:32.993 all subsystems of target started 00:10:32.993 nvmf target is running 00:10:32.993 all subsystems of target stopped 00:10:32.993 destroy targets's poll groups done 00:10:32.993 destroyed the nvmf target service 00:10:32.993 bdev subsystem finish successfully 00:10:32.993 nvmf threads destroy successfully 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.993 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.994 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.994 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.994 09:12:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.562 00:10:33.562 real 0m19.877s 00:10:33.562 user 0m46.187s 00:10:33.562 sys 0m6.106s 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:33.562 ************************************ 00:10:33.562 END TEST nvmf_example 00:10:33.562 ************************************ 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:33.562 ************************************ 00:10:33.562 START TEST nvmf_filesystem 00:10:33.562 ************************************ 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:33.562 * Looking for test storage... 00:10:33.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:33.562 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:33.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.826 --rc genhtml_branch_coverage=1 00:10:33.826 --rc genhtml_function_coverage=1 00:10:33.826 --rc genhtml_legend=1 00:10:33.826 --rc geninfo_all_blocks=1 00:10:33.826 --rc geninfo_unexecuted_blocks=1 00:10:33.826 00:10:33.826 ' 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:33.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.826 --rc genhtml_branch_coverage=1 00:10:33.826 --rc genhtml_function_coverage=1 00:10:33.826 --rc genhtml_legend=1 00:10:33.826 --rc geninfo_all_blocks=1 00:10:33.826 --rc geninfo_unexecuted_blocks=1 00:10:33.826 00:10:33.826 ' 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:33.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.826 --rc genhtml_branch_coverage=1 00:10:33.826 --rc genhtml_function_coverage=1 00:10:33.826 --rc genhtml_legend=1 00:10:33.826 --rc geninfo_all_blocks=1 00:10:33.826 --rc geninfo_unexecuted_blocks=1 00:10:33.826 00:10:33.826 ' 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:33.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.826 --rc genhtml_branch_coverage=1 00:10:33.826 --rc genhtml_function_coverage=1 00:10:33.826 --rc genhtml_legend=1 00:10:33.826 --rc geninfo_all_blocks=1 00:10:33.826 --rc geninfo_unexecuted_blocks=1 00:10:33.826 00:10:33.826 ' 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:33.826 09:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:33.826 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:33.827 
09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:33.827 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:33.827 #define SPDK_CONFIG_H 00:10:33.827 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:33.827 #define SPDK_CONFIG_APPS 1 00:10:33.827 #define SPDK_CONFIG_ARCH native 00:10:33.827 #undef SPDK_CONFIG_ASAN 00:10:33.827 #undef SPDK_CONFIG_AVAHI 00:10:33.827 #undef SPDK_CONFIG_CET 00:10:33.827 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:33.827 #define SPDK_CONFIG_COVERAGE 1 00:10:33.827 #define SPDK_CONFIG_CROSS_PREFIX 00:10:33.827 #undef SPDK_CONFIG_CRYPTO 00:10:33.827 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:33.827 #undef SPDK_CONFIG_CUSTOMOCF 00:10:33.827 #undef SPDK_CONFIG_DAOS 00:10:33.827 #define SPDK_CONFIG_DAOS_DIR 00:10:33.827 #define SPDK_CONFIG_DEBUG 1 00:10:33.827 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:33.827 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:33.827 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:33.827 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:33.827 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:33.827 #undef SPDK_CONFIG_DPDK_UADK 00:10:33.827 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:33.827 #define SPDK_CONFIG_EXAMPLES 1 00:10:33.827 #undef SPDK_CONFIG_FC 00:10:33.827 #define SPDK_CONFIG_FC_PATH 00:10:33.828 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:33.828 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:33.828 #define SPDK_CONFIG_FSDEV 1 00:10:33.828 #undef SPDK_CONFIG_FUSE 00:10:33.828 #undef SPDK_CONFIG_FUZZER 00:10:33.828 #define SPDK_CONFIG_FUZZER_LIB 00:10:33.828 #undef SPDK_CONFIG_GOLANG 00:10:33.828 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:33.828 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:33.828 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:33.828 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:33.828 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:33.828 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:33.828 #undef SPDK_CONFIG_HAVE_LZ4 00:10:33.828 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:33.828 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:33.828 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:33.828 #define SPDK_CONFIG_IDXD 1 00:10:33.828 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:33.828 #undef SPDK_CONFIG_IPSEC_MB 00:10:33.828 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:33.828 #define SPDK_CONFIG_ISAL 1 00:10:33.828 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:33.828 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:33.828 #define SPDK_CONFIG_LIBDIR 00:10:33.828 #undef SPDK_CONFIG_LTO 00:10:33.828 #define SPDK_CONFIG_MAX_LCORES 128 00:10:33.828 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:33.828 #define SPDK_CONFIG_NVME_CUSE 1 00:10:33.828 #undef SPDK_CONFIG_OCF 00:10:33.828 #define SPDK_CONFIG_OCF_PATH 00:10:33.828 #define SPDK_CONFIG_OPENSSL_PATH 00:10:33.828 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:33.828 #define SPDK_CONFIG_PGO_DIR 00:10:33.828 #undef SPDK_CONFIG_PGO_USE 00:10:33.828 #define SPDK_CONFIG_PREFIX /usr/local 00:10:33.828 #undef SPDK_CONFIG_RAID5F 00:10:33.828 #undef SPDK_CONFIG_RBD 00:10:33.828 #define SPDK_CONFIG_RDMA 1 00:10:33.828 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:33.828 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:33.828 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:33.828 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:33.828 #define SPDK_CONFIG_SHARED 1 00:10:33.828 #undef SPDK_CONFIG_SMA 00:10:33.828 #define SPDK_CONFIG_TESTS 1 00:10:33.828 #undef SPDK_CONFIG_TSAN 
00:10:33.828 #define SPDK_CONFIG_UBLK 1 00:10:33.828 #define SPDK_CONFIG_UBSAN 1 00:10:33.828 #undef SPDK_CONFIG_UNIT_TESTS 00:10:33.828 #undef SPDK_CONFIG_URING 00:10:33.828 #define SPDK_CONFIG_URING_PATH 00:10:33.828 #undef SPDK_CONFIG_URING_ZNS 00:10:33.828 #undef SPDK_CONFIG_USDT 00:10:33.828 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:33.828 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:33.828 #define SPDK_CONFIG_VFIO_USER 1 00:10:33.828 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:33.828 #define SPDK_CONFIG_VHOST 1 00:10:33.828 #define SPDK_CONFIG_VIRTIO 1 00:10:33.828 #undef SPDK_CONFIG_VTUNE 00:10:33.828 #define SPDK_CONFIG_VTUNE_DIR 00:10:33.828 #define SPDK_CONFIG_WERROR 1 00:10:33.828 #define SPDK_CONFIG_WPDK_DIR 00:10:33.828 #undef SPDK_CONFIG_XNVME 00:10:33.828 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:33.828 09:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:33.828 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:33.829 09:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:33.829 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
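The suppression-file steps traced above are the standard LeakSanitizer recipe: write one leak:<pattern> rule per line into a file, then point LSAN_OPTIONS at it so known third-party leaks (here libfuse3) do not fail the run. Reduced to its essentials, with the paths exactly as in this run:

    rm -rf /var/tmp/asan_suppression_file
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file   # ignore known libfuse3 leak
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file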
00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1014646 ]] 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1014646 00:10:33.830 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
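set_test_storage, expanded over the next stretch of trace, sizes a scratch area for the test: it creates the candidate directories (mkdir -p under /tmp/spdk.oLDQBs here), parses df -T into the mounts/fss/sizes/avails arrays, and exports the first candidate whose filesystem can hold the request (2 GiB plus margin — requested_size=2214592512 below) as SPDK_TEST_STORAGE. A simplified sketch of the selection logic, assuming GNU df and skipping the tmpfs/ramfs resize branch the real function has:

    requested_size=2214592512   # 2 GiB request plus margin, as in the trace
    for dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        avail=$(df -B1 --output=avail "$dir" 2>/dev/null | tail -n 1)
        if [ "${avail:-0}" -ge "$requested_size" ]; then
            export SPDK_TEST_STORAGE=$dir   # first candidate with enough room wins
            break
        fi
    done

Here the overlay root has ~189 GB available, so the first candidate, the in-tree test/nvmf/target directory, is chosen — as the "Found test storage" line in the trace confirms.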
00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.oLDQBs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.oLDQBs/tests/target /tmp/spdk.oLDQBs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:33.831 09:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189198401536 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963961344 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6765559808 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971949568 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981980672 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169748992 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192793088 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981657088 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981980672 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=323584 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:33.831 09:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:33.831 * Looking for test storage... 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189198401536 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8980152320 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:33.831 09:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:33.831 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:33.832 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:33.832 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:33.832 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:33.832 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.092 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.093 --rc genhtml_branch_coverage=1 00:10:34.093 --rc genhtml_function_coverage=1 00:10:34.093 --rc genhtml_legend=1 00:10:34.093 --rc geninfo_all_blocks=1 00:10:34.093 --rc geninfo_unexecuted_blocks=1 00:10:34.093 00:10:34.093 ' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.093 --rc genhtml_branch_coverage=1 00:10:34.093 --rc genhtml_function_coverage=1 00:10:34.093 --rc genhtml_legend=1 00:10:34.093 --rc geninfo_all_blocks=1 00:10:34.093 --rc geninfo_unexecuted_blocks=1 00:10:34.093 00:10:34.093 ' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.093 --rc genhtml_branch_coverage=1 00:10:34.093 --rc genhtml_function_coverage=1 00:10:34.093 --rc genhtml_legend=1 00:10:34.093 --rc geninfo_all_blocks=1 00:10:34.093 --rc geninfo_unexecuted_blocks=1 00:10:34.093 00:10:34.093 ' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:34.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.093 --rc genhtml_branch_coverage=1 00:10:34.093 --rc genhtml_function_coverage=1 00:10:34.093 --rc genhtml_legend=1 00:10:34.093 --rc geninfo_all_blocks=1 00:10:34.093 --rc geninfo_unexecuted_blocks=1 00:10:34.093 00:10:34.093 ' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.093 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.094 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:34.094 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:34.094 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.094 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:40.671 
09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.671 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.671 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.671 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.671 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.672 Found net devices under 
0000:86:00.1: cvl_0_1 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:10:40.672 00:10:40.672 --- 10.0.0.2 ping statistics --- 00:10:40.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.672 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:10:40.672 00:10:40.672 --- 10.0.0.1 ping statistics --- 00:10:40.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.672 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:40.672 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:40.672 ************************************ 00:10:40.672 START TEST nvmf_filesystem_no_in_capsule 00:10:40.672 ************************************ 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
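The rig that just passed its ping checks is built entirely from the two E810 ports discovered earlier (0000:86:00.0/.1, presumably cabled back to back): the target port cvl_0_0 is moved into its own network namespace and addressed 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and an iptables rule opens the NVMe/TCP port. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                 # initiator -> target (0.407 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator (0.192 ms above)

Namespacing the target's port keeps its kernel networking fully isolated from the initiator's, so the NVMe/TCP traffic crosses the physical link rather than the kernel loopback.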
00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1017903 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1017903 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1017903 ']' 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.672 [2024-11-19 09:12:41.086371] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:10:40.672 [2024-11-19 09:12:41.086414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.672 [2024-11-19 09:12:41.163530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.672 [2024-11-19 09:12:41.204473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.672 [2024-11-19 09:12:41.204511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.672 [2024-11-19 09:12:41.204518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.672 [2024-11-19 09:12:41.204525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.672 [2024-11-19 09:12:41.204530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
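nvmfappstart's launch line (nvmf/common.sh@508 above) maps one-to-one onto these startup notices: -i 0 becomes the spdk0 shared-memory file prefix in the DPDK EAL parameters (and the /dev/shm/nvmf_trace.0 name), -e 0xFFFF is the "Tracepoint Group Mask 0xFFFF" being acknowledged, and -m 0xF is the four-core mask behind the four reactor threads reported next. Annotated, with $SPDK_BIN_DIR as exported earlier in the trace:

    #   -i 0       shared-memory id -> --file-prefix=spdk0, /dev/shm/nvmf_trace.0
    #   -e 0xFFFF  enable every tracepoint group
    #   -m 0xF     core mask: cores 0-3, hence the four reactor notices below
    ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF

Aside: the earlier "[: : integer expression expected" complaint from nvmf/common.sh line 33 comes from this same argument-building path — '[' '' -eq 1 ']' tests an empty variable numerically. A "${VAR:-0}"-style default (the variable's name is not visible in the trace) would silence it without changing behavior.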
00:10:40.672 [2024-11-19 09:12:41.205900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.672 [2024-11-19 09:12:41.206014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.672 [2024-11-19 09:12:41.206050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.672 [2024-11-19 09:12:41.206051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.672 [2024-11-19 09:12:41.350563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.672 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.672 Malloc1 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.673 09:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.673 [2024-11-19 09:12:41.492250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:40.673 { 00:10:40.673 "name": "Malloc1", 00:10:40.673 "aliases": [ 00:10:40.673 "661efeeb-410a-40a4-b6f6-e76da670608d" 00:10:40.673 ], 00:10:40.673 "product_name": "Malloc disk", 00:10:40.673 "block_size": 512, 00:10:40.673 "num_blocks": 1048576, 00:10:40.673 "uuid": "661efeeb-410a-40a4-b6f6-e76da670608d", 00:10:40.673 "assigned_rate_limits": { 00:10:40.673 "rw_ios_per_sec": 0, 00:10:40.673 "rw_mbytes_per_sec": 0, 00:10:40.673 "r_mbytes_per_sec": 0, 00:10:40.673 "w_mbytes_per_sec": 0 00:10:40.673 }, 00:10:40.673 "claimed": true, 00:10:40.673 "claim_type": "exclusive_write", 00:10:40.673 "zoned": false, 00:10:40.673 "supported_io_types": { 00:10:40.673 "read": 
true, 00:10:40.673 "write": true, 00:10:40.673 "unmap": true, 00:10:40.673 "flush": true, 00:10:40.673 "reset": true, 00:10:40.673 "nvme_admin": false, 00:10:40.673 "nvme_io": false, 00:10:40.673 "nvme_io_md": false, 00:10:40.673 "write_zeroes": true, 00:10:40.673 "zcopy": true, 00:10:40.673 "get_zone_info": false, 00:10:40.673 "zone_management": false, 00:10:40.673 "zone_append": false, 00:10:40.673 "compare": false, 00:10:40.673 "compare_and_write": false, 00:10:40.673 "abort": true, 00:10:40.673 "seek_hole": false, 00:10:40.673 "seek_data": false, 00:10:40.673 "copy": true, 00:10:40.673 "nvme_iov_md": false 00:10:40.673 }, 00:10:40.673 "memory_domains": [ 00:10:40.673 { 00:10:40.673 "dma_device_id": "system", 00:10:40.673 "dma_device_type": 1 00:10:40.673 }, 00:10:40.673 { 00:10:40.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.673 "dma_device_type": 2 00:10:40.673 } 00:10:40.673 ], 00:10:40.673 "driver_specific": {} 00:10:40.673 } 00:10:40.673 ]' 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:40.673 09:12:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.052 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.052 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:42.052 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.052 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:42.052 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:43.957 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:44.216 09:12:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:44.785 09:12:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.724 ************************************ 00:10:45.724 START TEST filesystem_ext4 00:10:45.724 ************************************ 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
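For readability, a condensed sketch of what the xtrace above just did: target-side export of Malloc1, then host-side attach. This is a sketch, not the harness itself; rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py (rendered here as $RPC, an assumption), and HOSTNQN/HOSTID stand in for the uuid:80aaeb9f-0274-ea11-906e-0017a4403562 values visible in the log.

# Target side: expose the malloc bdev through the subsystem on 10.0.0.2:4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: attach over TCP, poll for the serial (up to 16 tries, 2 s apart), partition
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
  -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
i=0
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
  (( i++ <= 15 )) || exit 1   # give up after ~30 s
  sleep 2
done
mkdir -p /mnt/device
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe

The jq arithmetic at @1385-@1390 above (block_size 512 B times num_blocks 1048576 = 536870912 bytes) is what feeds the (( nvme_size == malloc_size )) guard before parted runs. The filesystem_ext4 banner just above then hands off to make_filesystem, whose xtrace follows: it selects mkfs's force flag (-F for ext4, -f for btrfs and xfs) before invoking mkfs.$fstype on /dev/nvme0n1p1.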
00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:45.724 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:45.724 mke2fs 1.47.0 (5-Feb-2023) 00:10:45.724 Discarding device blocks: 0/522240 done 00:10:45.983 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:45.983 Filesystem UUID: e5780da5-e511-41ee-bfa2-2c274b184bfa 00:10:45.983 Superblock backups stored on blocks: 00:10:45.983 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:45.983 00:10:45.983 Allocating group tables: 0/64 done 00:10:45.983 Writing inode tables: 0/64 done 00:10:45.983 Creating journal (8192 blocks): done 00:10:45.983 Writing superblocks and filesystem accounting information: 0/64 done 00:10:45.983 00:10:45.983 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:45.983 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.557 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.557 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:52.557 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.557 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:52.557 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:52.557 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.557 
09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1017903 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.558 00:10:52.558 real 0m6.006s 00:10:52.558 user 0m0.023s 00:10:52.558 sys 0m0.073s 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:52.558 ************************************ 00:10:52.558 END TEST filesystem_ext4 00:10:52.558 ************************************ 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.558 ************************************ 00:10:52.558 START TEST filesystem_btrfs 00:10:52.558 ************************************ 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:52.558 09:12:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:52.558 09:12:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:52.558 btrfs-progs v6.8.1 00:10:52.558 See https://btrfs.readthedocs.io for more information. 00:10:52.558 00:10:52.558 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:52.558 NOTE: several default settings have changed in version 5.15, please make sure 00:10:52.558 this does not affect your deployments: 00:10:52.558 - DUP for metadata (-m dup) 00:10:52.558 - enabled no-holes (-O no-holes) 00:10:52.558 - enabled free-space-tree (-R free-space-tree) 00:10:52.558 00:10:52.558 Label: (null) 00:10:52.558 UUID: 676b4b3f-4fc0-46b8-889c-91f9aecbd2fe 00:10:52.558 Node size: 16384 00:10:52.558 Sector size: 4096 (CPU page size: 4096) 00:10:52.558 Filesystem size: 510.00MiB 00:10:52.558 Block group profiles: 00:10:52.558 Data: single 8.00MiB 00:10:52.558 Metadata: DUP 32.00MiB 00:10:52.558 System: DUP 8.00MiB 00:10:52.558 SSD detected: yes 00:10:52.558 Zoned device: no 00:10:52.558 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:52.558 Checksum: crc32c 00:10:52.558 Number of devices: 1 00:10:52.558 Devices: 00:10:52.558 ID SIZE PATH 00:10:52.558 1 510.00MiB /dev/nvme0n1p1 00:10:52.558 00:10:52.558 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:52.558 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1017903 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.831 
09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.831 00:10:52.831 real 0m1.146s 00:10:52.831 user 0m0.027s 00:10:52.831 sys 0m0.111s 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.831 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:52.831 ************************************ 00:10:52.831 END TEST filesystem_btrfs 00:10:52.831 ************************************ 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.126 ************************************ 00:10:53.126 START TEST filesystem_xfs 00:10:53.126 ************************************ 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:53.126 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:53.126 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:53.126 = sectsz=512 attr=2, projid32bit=1 00:10:53.126 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:53.126 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:53.126 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:53.126 = sunit=0 swidth=0 blks 00:10:53.126 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:53.126 log =internal log bsize=4096 blocks=16384, version=2 00:10:53.126 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:53.126 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:54.123 Discarding blocks...Done. 00:10:54.123 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:54.123 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.079 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1017903 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.338 00:10:56.338 real 0m3.294s 00:10:56.338 user 0m0.021s 00:10:56.338 sys 0m0.076s 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.338 ************************************ 00:10:56.338 END TEST filesystem_xfs 00:10:56.338 ************************************ 00:10:56.338 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:56.596 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:56.596 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.596 09:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.596 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:56.596 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:56.596 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.596 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:56.596 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1017903 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1017903 ']' 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1017903 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1017903 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1017903' 00:10:56.855 killing process with pid 1017903 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 1017903 00:10:56.855 09:12:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 1017903 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:57.116 00:10:57.116 real 0m17.022s 00:10:57.116 user 1m6.941s 00:10:57.116 sys 0m1.407s 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.116 ************************************ 00:10:57.116 END TEST nvmf_filesystem_no_in_capsule 00:10:57.116 ************************************ 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:57.116 ************************************ 00:10:57.116 START TEST nvmf_filesystem_in_capsule 00:10:57.116 ************************************ 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1020905 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1020905 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1020905 ']' 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
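The pass starting here reruns the identical ext4/btrfs/xfs matrix with in-capsule data enabled. The only functional difference is the -c argument that filesystem.sh@52 feeds to the transport: with -c 4096, a host may carry up to 4 KiB of write payload inside the command capsule itself, rather than the target soliciting it afterwards with an R2T PDU. A sketch of the two invocations ($RPC as above; the -c 4096 line is verbatim from @52 below, while the -c 0 counterpart scrolled by before this excerpt and is inferred from the test name and the in_capsule=0 check at @76 earlier):

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule (inferred)
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule (from @52 below)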
00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:57.116 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 [2024-11-19 09:12:58.183518] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:10:57.376 [2024-11-19 09:12:58.183564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.376 [2024-11-19 09:12:58.262433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.376 [2024-11-19 09:12:58.301841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.376 [2024-11-19 09:12:58.301879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.376 [2024-11-19 09:12:58.301886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.376 [2024-11-19 09:12:58.301892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.376 [2024-11-19 09:12:58.301897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.376 [2024-11-19 09:12:58.303339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.376 [2024-11-19 09:12:58.303445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.376 [2024-11-19 09:12:58.303553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.376 [2024-11-19 09:12:58.303554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.376 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:57.376 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:57.376 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.376 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:57.376 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.635 [2024-11-19 09:12:58.448867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.635 09:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.635 Malloc1 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.635 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.635 [2024-11-19 09:12:58.592256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:57.636 09:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:57.636 { 00:10:57.636 "name": "Malloc1", 00:10:57.636 "aliases": [ 00:10:57.636 "cf046f86-8ece-42c8-a146-c1f9151ce1ea" 00:10:57.636 ], 00:10:57.636 "product_name": "Malloc disk", 00:10:57.636 "block_size": 512, 00:10:57.636 "num_blocks": 1048576, 00:10:57.636 "uuid": "cf046f86-8ece-42c8-a146-c1f9151ce1ea", 00:10:57.636 "assigned_rate_limits": { 00:10:57.636 "rw_ios_per_sec": 0, 00:10:57.636 "rw_mbytes_per_sec": 0, 00:10:57.636 "r_mbytes_per_sec": 0, 00:10:57.636 "w_mbytes_per_sec": 0 00:10:57.636 }, 00:10:57.636 "claimed": true, 00:10:57.636 "claim_type": "exclusive_write", 00:10:57.636 "zoned": false, 00:10:57.636 "supported_io_types": { 00:10:57.636 "read": true, 00:10:57.636 "write": true, 00:10:57.636 "unmap": true, 00:10:57.636 "flush": true, 00:10:57.636 "reset": true, 00:10:57.636 "nvme_admin": false, 00:10:57.636 "nvme_io": false, 00:10:57.636 "nvme_io_md": false, 00:10:57.636 "write_zeroes": true, 00:10:57.636 "zcopy": true, 00:10:57.636 "get_zone_info": false, 00:10:57.636 "zone_management": false, 00:10:57.636 "zone_append": false, 00:10:57.636 "compare": false, 00:10:57.636 "compare_and_write": false, 00:10:57.636 "abort": true, 00:10:57.636 "seek_hole": false, 00:10:57.636 "seek_data": false, 00:10:57.636 "copy": true, 00:10:57.636 "nvme_iov_md": false 00:10:57.636 }, 00:10:57.636 "memory_domains": [ 00:10:57.636 { 00:10:57.636 "dma_device_id": "system", 00:10:57.636 "dma_device_type": 1 00:10:57.636 }, 00:10:57.636 { 00:10:57.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.636 "dma_device_type": 2 00:10:57.636 } 00:10:57.636 ], 00:10:57.636 "driver_specific": {} 00:10:57.636 } 00:10:57.636 ]' 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:57.636 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:57.895 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:57.895 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:57.895 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:57.895 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:57.895 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.831 09:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.831 09:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:58.831 09:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.831 09:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:58.831 09:12:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:01.364 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:01.365 09:13:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:01.365 09:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:01.624 09:13:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:02.559 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:02.559 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:02.559 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:02.559 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.560 ************************************ 00:11:02.560 START TEST filesystem_in_capsule_ext4 00:11:02.560 ************************************ 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:02.560 09:13:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:02.560 mke2fs 1.47.0 (5-Feb-2023) 00:11:02.560 Discarding device blocks: 0/522240 done 00:11:02.560 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:02.560 Filesystem UUID: fcb82ad9-65c3-4363-8d29-10066abba9ca 00:11:02.560 Superblock backups stored on blocks: 00:11:02.560 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:02.560 00:11:02.560 Allocating group tables: 0/64 done 00:11:02.560 Writing inode tables: 
0/64 done 00:11:04.459 Creating journal (8192 blocks): done 00:11:04.459 Writing superblocks and filesystem accounting information: 0/64 done 00:11:04.459 00:11:04.459 09:13:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:04.459 09:13:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1020905 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.723 00:11:09.723 real 0m7.148s 00:11:09.723 user 0m0.035s 00:11:09.723 sys 0m0.063s 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:09.723 ************************************ 00:11:09.723 END TEST filesystem_in_capsule_ext4 00:11:09.723 ************************************ 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.723 
************************************ 00:11:09.723 START TEST filesystem_in_capsule_btrfs 00:11:09.723 ************************************ 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:09.723 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:09.981 btrfs-progs v6.8.1 00:11:09.981 See https://btrfs.readthedocs.io for more information. 00:11:09.981 00:11:09.981 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:09.981 NOTE: several default settings have changed in version 5.15, please make sure 00:11:09.981 this does not affect your deployments: 00:11:09.981 - DUP for metadata (-m dup) 00:11:09.981 - enabled no-holes (-O no-holes) 00:11:09.981 - enabled free-space-tree (-R free-space-tree) 00:11:09.981 00:11:09.981 Label: (null) 00:11:09.981 UUID: cfeb94a5-7022-4f1c-ad66-858629fe76e6 00:11:09.981 Node size: 16384 00:11:09.981 Sector size: 4096 (CPU page size: 4096) 00:11:09.981 Filesystem size: 510.00MiB 00:11:09.981 Block group profiles: 00:11:09.981 Data: single 8.00MiB 00:11:09.981 Metadata: DUP 32.00MiB 00:11:09.981 System: DUP 8.00MiB 00:11:09.981 SSD detected: yes 00:11:09.981 Zoned device: no 00:11:09.981 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:09.981 Checksum: crc32c 00:11:09.981 Number of devices: 1 00:11:09.981 Devices: 00:11:09.981 ID SIZE PATH 00:11:09.981 1 510.00MiB /dev/nvme0n1p1 00:11:09.982 00:11:09.982 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:09.982 09:13:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1020905 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.548 00:11:10.548 real 0m0.915s 00:11:10.548 user 0m0.022s 00:11:10.548 sys 0m0.117s 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.548 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:10.548 ************************************ 00:11:10.548 END TEST filesystem_in_capsule_btrfs 00:11:10.548 ************************************ 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.806 ************************************ 00:11:10.806 START TEST filesystem_in_capsule_xfs 00:11:10.806 ************************************ 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:10.806 09:13:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:10.806 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:10.806 = sectsz=512 attr=2, projid32bit=1 00:11:10.806 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:10.806 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:10.806 data = bsize=4096 blocks=130560, imaxpct=25 00:11:10.806 = sunit=0 swidth=0 blks 00:11:10.806 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:10.806 log =internal log bsize=4096 blocks=16384, version=2 00:11:10.806 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:10.806 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:11.740 Discarding blocks...Done. 
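For reference, the make_filesystem helper being traced above (common/autotest_common.sh@928-@947) reduces to roughly the following bash sketch; it is reconstructed from the xtrace, the real helper's retry loop is abbreviated, and the ext4 flag is an assumption since only the xfs branch appears in this trace:

    # Sketch, not the verbatim helper: pick the "force" flag for the
    # requested filesystem and format the device. The @933-@936 checks
    # above show force=-f being chosen for xfs; -F for ext4 is assumed
    # (that is mke2fs's force spelling), as only the else branch runs here.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0        # retry counter in the real helper; unused in this sketch
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        mkfs."$fstype" $force "$dev_name" && return 0
    }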
00:11:11.740 09:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:11.740 09:13:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1020905 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.270 00:11:14.270 real 0m3.529s 00:11:14.270 user 0m0.025s 00:11:14.270 sys 0m0.073s 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:14.270 ************************************ 00:11:14.270 END TEST filesystem_in_capsule_xfs 00:11:14.270 ************************************ 00:11:14.270 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:14.529 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:14.529 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1020905 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1020905 ']' 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1020905 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1020905 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1020905' 00:11:14.787 killing process with pid 1020905 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 1020905 00:11:14.787 09:13:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 1020905 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:15.047 00:11:15.047 real 0m17.883s 00:11:15.047 user 1m10.360s 00:11:15.047 sys 0m1.440s 00:11:15.047 09:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.047 ************************************ 00:11:15.047 END TEST nvmf_filesystem_in_capsule 00:11:15.047 ************************************ 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.047 rmmod nvme_tcp 00:11:15.047 rmmod nvme_fabrics 00:11:15.047 rmmod nvme_keyring 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.047 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:15.307 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:15.307 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.307 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.307 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.307 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.307 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.307 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.307 09:13:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.210 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.210 00:11:17.210 real 0m43.680s 00:11:17.210 user 2m19.379s 00:11:17.210 sys 0m7.578s 00:11:17.210 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.210 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.210 
************************************ 00:11:17.210 END TEST nvmf_filesystem 00:11:17.210 ************************************ 00:11:17.210 09:13:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:17.210 09:13:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:17.210 09:13:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.210 09:13:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.210 ************************************ 00:11:17.210 START TEST nvmf_target_discovery 00:11:17.210 ************************************ 00:11:17.210 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:17.470 * Looking for test storage... 00:11:17.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:17.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.470 --rc genhtml_branch_coverage=1 00:11:17.470 --rc genhtml_function_coverage=1 00:11:17.470 --rc genhtml_legend=1 00:11:17.470 --rc geninfo_all_blocks=1 00:11:17.470 --rc geninfo_unexecuted_blocks=1 00:11:17.470 00:11:17.470 ' 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:17.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.470 --rc genhtml_branch_coverage=1 00:11:17.470 --rc genhtml_function_coverage=1 00:11:17.470 --rc genhtml_legend=1 00:11:17.470 --rc geninfo_all_blocks=1 00:11:17.470 --rc geninfo_unexecuted_blocks=1 00:11:17.470 00:11:17.470 ' 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:17.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.470 --rc genhtml_branch_coverage=1 00:11:17.470 --rc genhtml_function_coverage=1 00:11:17.470 --rc genhtml_legend=1 00:11:17.470 --rc geninfo_all_blocks=1 00:11:17.470 --rc geninfo_unexecuted_blocks=1 00:11:17.470 00:11:17.470 ' 00:11:17.470 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:17.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.471 --rc genhtml_branch_coverage=1 00:11:17.471 --rc genhtml_function_coverage=1 00:11:17.471 --rc genhtml_legend=1 00:11:17.471 --rc geninfo_all_blocks=1 00:11:17.471 --rc geninfo_unexecuted_blocks=1 00:11:17.471 00:11:17.471 ' 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.471 09:13:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.045 09:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.045 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:24.046 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:24.046 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:24.046 Found net devices under 0000:86:00.0: cvl_0_0 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
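The @410-@429 loop being traced here amounts to the following sketch of how nvmf/common.sh maps each supported PCI function to its kernel net device; it is reconstructed from the trace, with the pci_devs seed values taken from the "Found 0000:86:00.x" lines above and the operstate filter (@418) reduced to a comment:

    shopt -s nullglob                     # so the emptiness check below is meaningful
    pci_devs=(0000:86:00.0 0000:86:00.1)  # the two e810 functions reported above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # sysfs exposes a function's net devices as entries under .../net/
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} == 0 )) && continue
        # the real loop also skips devices whose operstate is not "up" (@418)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the names, e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done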
00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:24.046 Found net devices under 0000:86:00.1: cvl_0_1 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.046 09:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:11:24.046 00:11:24.046 --- 10.0.0.2 ping statistics --- 00:11:24.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.046 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:24.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:11:24.046 00:11:24.046 --- 10.0.0.1 ping statistics --- 00:11:24.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.046 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.046 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1027492 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1027492 00:11:24.047 09:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 1027492 ']' 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 [2024-11-19 09:13:24.503679] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:11:24.047 [2024-11-19 09:13:24.503726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.047 [2024-11-19 09:13:24.584598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.047 [2024-11-19 09:13:24.625620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.047 [2024-11-19 09:13:24.625660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.047 [2024-11-19 09:13:24.625667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.047 [2024-11-19 09:13:24.625673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.047 [2024-11-19 09:13:24.625679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
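Distilled from the rpc_cmd calls traced below (target/discovery.sh@23-@35), the target-side setup amounts to this sketch; the direct rpc.py invocation and the loop form are assumptions, while every argument is copied from the log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport; the -o and -u 8192 flags are copied verbatim from @23
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # Four null bdevs sized per NULL_BDEV_SIZE/NULL_BLOCK_SIZE (@11-@12 above),
    # each exported through its own subsystem and TCP listener (@26-@30)
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

    # Expose the discovery subsystem itself and add a referral (@32, @35)
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

The nvme discover output further below (five records: the current discovery subsystem plus cnode1-cnode4, all on 10.0.0.2:4420) is exactly what this setup predicts.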
00:11:24.047 [2024-11-19 09:13:24.627157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.047 [2024-11-19 09:13:24.627271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.047 [2024-11-19 09:13:24.627358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.047 [2024-11-19 09:13:24.627359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 [2024-11-19 09:13:24.773003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 Null1 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 [2024-11-19 09:13:24.822541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 Null2 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:24.047 Null3 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 Null4 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.047 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.048 09:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.048 09:13:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:24.306 00:11:24.306 Discovery Log Number of Records 5, Generation counter 6 00:11:24.306 =====Discovery Log Entry 0====== 00:11:24.306 trtype: tcp 00:11:24.306 adrfam: ipv4 00:11:24.306 subtype: current discovery subsystem 00:11:24.306 treq: not required 00:11:24.307 portid: 0 00:11:24.307 trsvcid: 4420 00:11:24.307 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:24.307 traddr: 10.0.0.2 00:11:24.307 eflags: explicit discovery connections, duplicate discovery information 00:11:24.307 sectype: none 00:11:24.307 =====Discovery Log Entry 1====== 00:11:24.307 trtype: tcp 00:11:24.307 adrfam: ipv4 00:11:24.307 subtype: nvme subsystem 00:11:24.307 treq: not required 00:11:24.307 portid: 0 00:11:24.307 trsvcid: 4420 00:11:24.307 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:24.307 traddr: 10.0.0.2 00:11:24.307 eflags: none 00:11:24.307 sectype: none 00:11:24.307 =====Discovery Log Entry 2====== 00:11:24.307 trtype: tcp 00:11:24.307 adrfam: ipv4 00:11:24.307 subtype: nvme subsystem 00:11:24.307 treq: not required 00:11:24.307 portid: 0 00:11:24.307 trsvcid: 4420 00:11:24.307 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:24.307 traddr: 10.0.0.2 00:11:24.307 eflags: none 00:11:24.307 sectype: none 00:11:24.307 =====Discovery Log Entry 3====== 00:11:24.307 trtype: tcp 00:11:24.307 adrfam: ipv4 00:11:24.307 subtype: nvme subsystem 00:11:24.307 treq: not required 00:11:24.307 portid: 0 00:11:24.307 trsvcid: 4420 00:11:24.307 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:24.307 traddr: 10.0.0.2 00:11:24.307 eflags: none 00:11:24.307 sectype: none 00:11:24.307 =====Discovery Log Entry 4====== 00:11:24.307 trtype: tcp 00:11:24.307 adrfam: ipv4 00:11:24.307 subtype: nvme subsystem 
00:11:24.307 treq: not required 00:11:24.307 portid: 0 00:11:24.307 trsvcid: 4420 00:11:24.307 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:24.307 traddr: 10.0.0.2 00:11:24.307 eflags: none 00:11:24.307 sectype: none 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:24.307 Perform nvmf subsystem discovery via RPC 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 [ 00:11:24.307 { 00:11:24.307 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:24.307 "subtype": "Discovery", 00:11:24.307 "listen_addresses": [ 00:11:24.307 { 00:11:24.307 "trtype": "TCP", 00:11:24.307 "adrfam": "IPv4", 00:11:24.307 "traddr": "10.0.0.2", 00:11:24.307 "trsvcid": "4420" 00:11:24.307 } 00:11:24.307 ], 00:11:24.307 "allow_any_host": true, 00:11:24.307 "hosts": [] 00:11:24.307 }, 00:11:24.307 { 00:11:24.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.307 "subtype": "NVMe", 00:11:24.307 "listen_addresses": [ 00:11:24.307 { 00:11:24.307 "trtype": "TCP", 00:11:24.307 "adrfam": "IPv4", 00:11:24.307 "traddr": "10.0.0.2", 00:11:24.307 "trsvcid": "4420" 00:11:24.307 } 00:11:24.307 ], 00:11:24.307 "allow_any_host": true, 00:11:24.307 "hosts": [], 00:11:24.307 "serial_number": "SPDK00000000000001", 00:11:24.307 "model_number": "SPDK bdev Controller", 00:11:24.307 "max_namespaces": 32, 00:11:24.307 "min_cntlid": 1, 00:11:24.307 "max_cntlid": 65519, 00:11:24.307 "namespaces": [ 00:11:24.307 { 00:11:24.307 "nsid": 1, 00:11:24.307 "bdev_name": "Null1", 00:11:24.307 "name": "Null1", 00:11:24.307 "nguid": "EDAABA909C52441EB37D206F49C61509", 00:11:24.307 "uuid": "edaaba90-9c52-441e-b37d-206f49c61509" 00:11:24.307 } 00:11:24.307 ] 00:11:24.307 }, 00:11:24.307 { 00:11:24.307 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:24.307 "subtype": "NVMe", 00:11:24.307 "listen_addresses": [ 00:11:24.307 { 00:11:24.307 "trtype": "TCP", 00:11:24.307 "adrfam": "IPv4", 00:11:24.307 "traddr": "10.0.0.2", 00:11:24.307 "trsvcid": "4420" 00:11:24.307 } 00:11:24.307 ], 00:11:24.307 "allow_any_host": true, 00:11:24.307 "hosts": [], 00:11:24.307 "serial_number": "SPDK00000000000002", 00:11:24.307 "model_number": "SPDK bdev Controller", 00:11:24.307 "max_namespaces": 32, 00:11:24.307 "min_cntlid": 1, 00:11:24.307 "max_cntlid": 65519, 00:11:24.307 "namespaces": [ 00:11:24.307 { 00:11:24.307 "nsid": 1, 00:11:24.307 "bdev_name": "Null2", 00:11:24.307 "name": "Null2", 00:11:24.307 "nguid": "32EA96CD334E4CE2B54C77ABA954F702", 00:11:24.307 "uuid": "32ea96cd-334e-4ce2-b54c-77aba954f702" 00:11:24.307 } 00:11:24.307 ] 00:11:24.307 }, 00:11:24.307 { 00:11:24.307 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:24.307 "subtype": "NVMe", 00:11:24.307 "listen_addresses": [ 00:11:24.307 { 00:11:24.307 "trtype": "TCP", 00:11:24.307 "adrfam": "IPv4", 00:11:24.307 "traddr": "10.0.0.2", 00:11:24.307 "trsvcid": "4420" 00:11:24.307 } 00:11:24.307 ], 00:11:24.307 "allow_any_host": true, 00:11:24.307 "hosts": [], 00:11:24.307 "serial_number": "SPDK00000000000003", 00:11:24.307 "model_number": "SPDK bdev Controller", 00:11:24.307 "max_namespaces": 32, 00:11:24.307 "min_cntlid": 1, 00:11:24.307 "max_cntlid": 65519, 00:11:24.307 "namespaces": [ 00:11:24.307 { 
00:11:24.307 "nsid": 1, 00:11:24.307 "bdev_name": "Null3", 00:11:24.307 "name": "Null3", 00:11:24.307 "nguid": "FC57A4EFCB714B99AB5AEE2923FB3884", 00:11:24.307 "uuid": "fc57a4ef-cb71-4b99-ab5a-ee2923fb3884" 00:11:24.307 } 00:11:24.307 ] 00:11:24.307 }, 00:11:24.307 { 00:11:24.307 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:24.307 "subtype": "NVMe", 00:11:24.307 "listen_addresses": [ 00:11:24.307 { 00:11:24.307 "trtype": "TCP", 00:11:24.307 "adrfam": "IPv4", 00:11:24.307 "traddr": "10.0.0.2", 00:11:24.307 "trsvcid": "4420" 00:11:24.307 } 00:11:24.307 ], 00:11:24.307 "allow_any_host": true, 00:11:24.307 "hosts": [], 00:11:24.307 "serial_number": "SPDK00000000000004", 00:11:24.307 "model_number": "SPDK bdev Controller", 00:11:24.307 "max_namespaces": 32, 00:11:24.307 "min_cntlid": 1, 00:11:24.307 "max_cntlid": 65519, 00:11:24.307 "namespaces": [ 00:11:24.307 { 00:11:24.307 "nsid": 1, 00:11:24.307 "bdev_name": "Null4", 00:11:24.307 "name": "Null4", 00:11:24.307 "nguid": "5C030790D3BF4E46A8A470224407BF47", 00:11:24.307 "uuid": "5c030790-d3bf-4e46-a8a4-70224407bf47" 00:11:24.307 } 00:11:24.307 ] 00:11:24.307 } 00:11:24.307 ] 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 09:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.307 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.308 09:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.308 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.308 rmmod nvme_tcp 00:11:24.566 rmmod nvme_fabrics 00:11:24.566 rmmod nvme_keyring 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1027492 ']' 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1027492 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 1027492 ']' 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 1027492 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1027492 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1027492' 00:11:24.566 killing process with pid 1027492 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 1027492 00:11:24.566 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 1027492 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@297 -- # iptr 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.826 09:13:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.731 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:26.731 00:11:26.731 real 0m9.448s 00:11:26.731 user 0m5.994s 00:11:26.731 sys 0m4.753s 00:11:26.731 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.731 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.731 ************************************ 00:11:26.731 END TEST nvmf_target_discovery 00:11:26.731 ************************************ 00:11:26.731 09:13:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:26.731 09:13:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:26.731 09:13:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.731 09:13:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.731 ************************************ 00:11:26.731 START TEST nvmf_referrals 00:11:26.731 ************************************ 00:11:26.731 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:26.990 * Looking for test storage... 
00:11:26.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:26.990 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:26.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.991 --rc genhtml_branch_coverage=1 00:11:26.991 --rc genhtml_function_coverage=1 00:11:26.991 --rc genhtml_legend=1 00:11:26.991 --rc geninfo_all_blocks=1 00:11:26.991 --rc geninfo_unexecuted_blocks=1 00:11:26.991 00:11:26.991 ' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:26.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.991 --rc genhtml_branch_coverage=1 00:11:26.991 --rc genhtml_function_coverage=1 00:11:26.991 --rc genhtml_legend=1 00:11:26.991 --rc geninfo_all_blocks=1 00:11:26.991 --rc geninfo_unexecuted_blocks=1 00:11:26.991 00:11:26.991 ' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:26.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.991 --rc genhtml_branch_coverage=1 00:11:26.991 --rc genhtml_function_coverage=1 00:11:26.991 --rc genhtml_legend=1 00:11:26.991 --rc geninfo_all_blocks=1 00:11:26.991 --rc geninfo_unexecuted_blocks=1 00:11:26.991 00:11:26.991 ' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:26.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.991 --rc genhtml_branch_coverage=1 00:11:26.991 --rc genhtml_function_coverage=1 00:11:26.991 --rc genhtml_legend=1 00:11:26.991 --rc geninfo_all_blocks=1 00:11:26.991 --rc geninfo_unexecuted_blocks=1 00:11:26.991 00:11:26.991 ' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:26.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:26.991 09:13:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:33.566 09:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:33.566 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:33.566 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:33.566 
09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.566 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:33.567 Found net devices under 0000:86:00.0: cvl_0_0 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:33.567 Found net devices under 0000:86:00.1: cvl_0_1 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.567 09:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:33.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:11:33.567 00:11:33.567 --- 10.0.0.2 ping statistics --- 00:11:33.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.567 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:11:33.567 00:11:33.567 --- 10.0.0.1 ping statistics --- 00:11:33.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.567 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1031208 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1031208 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 1031208 ']' 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:33.567 09:13:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.567 [2024-11-19 09:13:34.025140] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:11:33.567 [2024-11-19 09:13:34.025185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.567 [2024-11-19 09:13:34.105048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.567 [2024-11-19 09:13:34.149750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.567 [2024-11-19 09:13:34.149784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.567 [2024-11-19 09:13:34.149791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.567 [2024-11-19 09:13:34.149798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.567 [2024-11-19 09:13:34.149804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.567 [2024-11-19 09:13:34.151171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.567 [2024-11-19 09:13:34.151200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.567 [2024-11-19 09:13:34.151301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.567 [2024-11-19 09:13:34.151301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.567 [2024-11-19 09:13:34.296859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:33.567 [2024-11-19 09:13:34.310340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -ah 00:11:33.567 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 -ah 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 -ah 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:33.568 09:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.568 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.826 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery -ah 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 -ah 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:34.084 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:34.085 09:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.085 09:13:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.343 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.602 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:34.895 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:34.895 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:34.895 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:34.895 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:34.895 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:34.895 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:34.895 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:35.152 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:35.152 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:35.152 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:35.152 09:13:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:35.152 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:35.152 09:13:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:35.152 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:35.408 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:35.408 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:35.408 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:35.408 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:35.408 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:35.408 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:35.408 09:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.408 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:35.408 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.408 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.409 rmmod nvme_tcp 00:11:35.409 rmmod nvme_fabrics 00:11:35.666 rmmod nvme_keyring 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1031208 ']' 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1031208 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 1031208 ']' 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 1031208 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1031208 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1031208' 00:11:35.666 killing process with pid 1031208 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 1031208 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 1031208 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:11:35.666 09:13:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.202 00:11:38.202 real 0m11.009s 00:11:38.202 user 0m12.752s 00:11:38.202 sys 0m5.259s 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:38.202 ************************************ 00:11:38.202 END TEST nvmf_referrals 00:11:38.202 ************************************ 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.202 ************************************ 00:11:38.202 START TEST nvmf_connect_disconnect 00:11:38.202 ************************************ 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:38.202 * Looking for test storage... 00:11:38.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:38.202 09:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
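The cmp_versions trace that continues below compares the installed lcov version (1.15) against 2 component by component to pick coverage flags. A condensed paraphrase of the same check, not the actual scripts/common.sh implementation:

  lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo "lcov < 2"   # the branch taken in this run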
00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:38.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.202 --rc genhtml_branch_coverage=1 00:11:38.202 --rc genhtml_function_coverage=1 00:11:38.202 --rc genhtml_legend=1 00:11:38.202 --rc geninfo_all_blocks=1 00:11:38.202 --rc geninfo_unexecuted_blocks=1 00:11:38.202 00:11:38.202 ' 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:38.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.202 --rc genhtml_branch_coverage=1 00:11:38.202 --rc genhtml_function_coverage=1 00:11:38.202 --rc genhtml_legend=1 00:11:38.202 --rc geninfo_all_blocks=1 00:11:38.202 --rc geninfo_unexecuted_blocks=1 00:11:38.202 00:11:38.202 ' 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:38.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.202 --rc genhtml_branch_coverage=1 00:11:38.202 --rc genhtml_function_coverage=1 00:11:38.202 --rc genhtml_legend=1 00:11:38.202 --rc geninfo_all_blocks=1 00:11:38.202 --rc geninfo_unexecuted_blocks=1 00:11:38.202 00:11:38.202 ' 00:11:38.202 09:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:38.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.202 --rc genhtml_branch_coverage=1 00:11:38.202 --rc genhtml_function_coverage=1 00:11:38.202 --rc genhtml_legend=1 00:11:38.202 --rc geninfo_all_blocks=1 00:11:38.202 --rc geninfo_unexecuted_blocks=1 00:11:38.202 00:11:38.202 ' 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.202 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.203 09:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.203 09:13:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:44.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:44.773 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:44.773 Found net devices under 0000:86:00.0: cvl_0_0 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:44.773 Found net devices under 0000:86:00.1: cvl_0_1 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.773 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:11:44.774 00:11:44.774 --- 10.0.0.2 ping statistics --- 00:11:44.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.774 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:11:44.774 00:11:44.774 --- 10.0.0.1 ping statistics --- 00:11:44.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.774 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:44.774 09:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1035289 
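Condensed from the namespace setup traced above: the test pins the target side of the e810 pair into its own network namespace so initiator and target traffic cross a real link (interface names and addresses as used in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

The two ping statistics above (0.422 ms and 0.218 ms round trips, no loss) confirm connectivity in both directions before the target application is started inside the namespace.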
00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1035289 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 1035289 ']' 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 [2024-11-19 09:13:45.054847] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:11:44.774 [2024-11-19 09:13:45.054891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.774 [2024-11-19 09:13:45.135303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.774 [2024-11-19 09:13:45.176637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.774 [2024-11-19 09:13:45.176674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.774 [2024-11-19 09:13:45.176681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.774 [2024-11-19 09:13:45.176687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.774 [2024-11-19 09:13:45.176691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
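As the app_setup_trace notices above suggest, a tracepoint snapshot of this target instance (shm id 0, tracepoint group mask 0xFFFF) could be captured while it runs; a sketch, with the spdk_trace binary path assumed from this workspace's build layout:

  ./build/bin/spdk_trace -s nvmf -i 0    # live snapshot, as suggested by the notice
  cp /dev/shm/nvmf_trace.0 .             # or keep the shm copy for offline analysis/debug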
00:11:44.774 [2024-11-19 09:13:45.178206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.774 [2024-11-19 09:13:45.178312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.774 [2024-11-19 09:13:45.178421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.774 [2024-11-19 09:13:45.178422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 [2024-11-19 09:13:45.327786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 09:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 [2024-11-19 09:13:45.401926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:44.774 09:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:48.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:01.267 rmmod nvme_tcp 00:12:01.267 rmmod nvme_fabrics 00:12:01.267 rmmod nvme_keyring 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1035289 ']' 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1035289 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1035289 ']' 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 1035289 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
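The rpc_cmd calls traced above go to SPDK's scripts/rpc.py over /var/tmp/spdk.sock, and the "disconnected 1 controller(s)" lines are nvme-cli's disconnect output. A condensed sketch of the same target setup and the five connect/disconnect iterations, omitting the waits and sanity checks the real connect_disconnect.sh performs:

  rpc="$SPDK_DIR/scripts/rpc.py"
  rpc_sock_opts=""   # defaults to /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512      # 64 MB malloc bdev, 512 B blocks -> Malloc0
  # -a: allow any host, -s: serial number
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do             # num_iterations=5 in the trace above
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done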
00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1035289 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1035289' 00:12:01.267 killing process with pid 1035289 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 1035289 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 1035289 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.267 09:14:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.172 00:12:03.172 real 0m25.199s 00:12:03.172 user 1m8.459s 00:12:03.172 sys 0m5.773s 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.172 ************************************ 00:12:03.172 END TEST nvmf_connect_disconnect 00:12:03.172 ************************************ 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:03.172 09:14:04 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.172 ************************************ 00:12:03.172 START TEST nvmf_multitarget 00:12:03.172 ************************************ 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:03.172 * Looking for test storage... 00:12:03.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:03.172 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.432 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:03.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.433 --rc genhtml_branch_coverage=1 00:12:03.433 --rc genhtml_function_coverage=1 00:12:03.433 --rc genhtml_legend=1 00:12:03.433 --rc geninfo_all_blocks=1 00:12:03.433 --rc geninfo_unexecuted_blocks=1 00:12:03.433 00:12:03.433 ' 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:03.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.433 --rc genhtml_branch_coverage=1 00:12:03.433 --rc genhtml_function_coverage=1 00:12:03.433 --rc genhtml_legend=1 00:12:03.433 --rc geninfo_all_blocks=1 00:12:03.433 --rc geninfo_unexecuted_blocks=1 00:12:03.433 00:12:03.433 ' 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:03.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.433 --rc genhtml_branch_coverage=1 00:12:03.433 --rc genhtml_function_coverage=1 00:12:03.433 --rc genhtml_legend=1 00:12:03.433 --rc geninfo_all_blocks=1 00:12:03.433 --rc geninfo_unexecuted_blocks=1 00:12:03.433 00:12:03.433 ' 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:03.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.433 --rc genhtml_branch_coverage=1 00:12:03.433 --rc genhtml_function_coverage=1 00:12:03.433 --rc genhtml_legend=1 00:12:03.433 --rc geninfo_all_blocks=1 00:12:03.433 --rc geninfo_unexecuted_blocks=1 00:12:03.433 00:12:03.433 ' 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.433 09:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.433 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:03.434 09:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.434 09:14:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:10.003 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:10.003 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:10.003 Found net devices under 0000:86:00.0: cvl_0_0 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:10.003 Found net devices under 0000:86:00.1: cvl_0_1 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:10.003 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.004 09:14:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:10.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:12:10.004 00:12:10.004 --- 10.0.0.2 ping statistics --- 00:12:10.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.004 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:10.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:12:10.004 00:12:10.004 --- 10.0.0.1 ping statistics --- 00:12:10.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.004 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1041689 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1041689 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 1041689 ']' 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:10.004 [2024-11-19 09:14:10.305032] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
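The plumbing traced above gives the tests a target/initiator split on a single host: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove reachability in both directions. The same steps by hand (the ipts wrapper above is plain iptables plus an SPDK_NVMF comment):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic from the initiator side in on port 4420
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator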
00:12:10.004 [2024-11-19 09:14:10.305079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.004 [2024-11-19 09:14:10.368683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.004 [2024-11-19 09:14:10.413010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.004 [2024-11-19 09:14:10.413043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.004 [2024-11-19 09:14:10.413050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.004 [2024-11-19 09:14:10.413057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.004 [2024-11-19 09:14:10.413062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.004 [2024-11-19 09:14:10.414644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.004 [2024-11-19 09:14:10.414752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.004 [2024-11-19 09:14:10.415966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.004 [2024-11-19 09:14:10.415969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:10.004 "nvmf_tgt_1" 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:10.004 "nvmf_tgt_2" 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:10.004 09:14:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:10.262 true 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:10.262 true 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.262 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.262 rmmod nvme_tcp 00:12:10.521 rmmod nvme_fabrics 00:12:10.521 rmmod nvme_keyring 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1041689 ']' 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1041689 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 1041689 ']' 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 1041689 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1041689 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:10.521 09:14:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1041689' 00:12:10.521 killing process with pid 1041689 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 1041689 00:12:10.521 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 1041689 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.780 09:14:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.688 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.688 00:12:12.688 real 0m9.536s 00:12:12.688 user 0m7.248s 00:12:12.688 sys 0m4.817s 00:12:12.688 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.688 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:12.688 ************************************ 00:12:12.688 END TEST nvmf_multitarget 00:12:12.688 ************************************ 00:12:12.688 09:14:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.688 09:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:12.688 09:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.688 09:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.688 ************************************ 00:12:12.688 START TEST nvmf_rpc 00:12:12.688 ************************************ 00:12:12.688 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:12.951 * Looking for test storage... 
00:12:12.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.951 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:12.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.951 --rc genhtml_branch_coverage=1 00:12:12.952 --rc genhtml_function_coverage=1 00:12:12.952 --rc genhtml_legend=1 00:12:12.952 --rc geninfo_all_blocks=1 00:12:12.952 --rc geninfo_unexecuted_blocks=1 00:12:12.952 00:12:12.952 ' 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:12.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.952 --rc genhtml_branch_coverage=1 00:12:12.952 --rc genhtml_function_coverage=1 00:12:12.952 --rc genhtml_legend=1 00:12:12.952 --rc geninfo_all_blocks=1 00:12:12.952 --rc geninfo_unexecuted_blocks=1 00:12:12.952 00:12:12.952 ' 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:12.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.952 --rc genhtml_branch_coverage=1 00:12:12.952 --rc genhtml_function_coverage=1 00:12:12.952 --rc genhtml_legend=1 00:12:12.952 --rc geninfo_all_blocks=1 00:12:12.952 --rc geninfo_unexecuted_blocks=1 00:12:12.952 00:12:12.952 ' 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:12.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.952 --rc genhtml_branch_coverage=1 00:12:12.952 --rc genhtml_function_coverage=1 00:12:12.952 --rc genhtml_legend=1 00:12:12.952 --rc geninfo_all_blocks=1 00:12:12.952 --rc geninfo_unexecuted_blocks=1 00:12:12.952 00:12:12.952 ' 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
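The cmp_versions trace above runs lt 1.15 2 to decide that the installed lcov predates 2.x, which is why the older --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spellings get exported. A condensed sketch of that field-wise compare (the real scripts/common.sh also splits on - and :, not just .):

  lt() {            # return 0 if $1 < $2, comparing fields numerically
      local IFS=. v
      local -a a=($1) b=($2)
      for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; v++ )); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1      # equal is not less-than
  }
  lt 1.15 2 && echo 'lcov < 2: use --rc lcov_*_coverage=1'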
00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.952 09:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.952 09:14:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:19.657 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:19.657 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.657 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:19.658 Found net devices under 0000:86:00.0: cvl_0_0 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:19.658 Found net devices under 0000:86:00.1: cvl_0_1 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.658 09:14:19 
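
Note: gather_supported_nvmf_pci_devs above matches the known Intel/Mellanox device IDs (here the E810 IDs 0x1592/0x159b) against the PCI bus and then resolves each function to its kernel interface through sysfs, which is how cvl_0_0 and cvl_0_1 are discovered. A condensed sketch of that lookup, using the addresses seen in this run:

  for pci in 0000:86:00.0 0000:86:00.1; do
      # the kernel exposes each bound interface as a directory under .../net/
      for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$netdir" ] || continue          # glob may not match if no driver is bound
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done
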
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:19.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:12:19.658 00:12:19.658 --- 10.0.0.2 ping statistics --- 00:12:19.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.658 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:12:19.658 00:12:19.658 --- 10.0.0.1 ping statistics --- 00:12:19.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.658 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1045378 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1045378 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 1045378 ']' 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:19.658 09:14:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.658 [2024-11-19 09:14:19.948981] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
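
Note: nvmf_tcp_init above builds a point-to-point topology out of the two E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings prove reachability in both directions before the target application starts. Condensed from the trace, to be run as root:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let initiator-side traffic to the NVMe/TCP port through the firewall
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as logged below) and waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs.
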
00:12:19.658 [2024-11-19 09:14:19.949031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.658 [2024-11-19 09:14:20.029492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.658 [2024-11-19 09:14:20.081169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.658 [2024-11-19 09:14:20.081205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.658 [2024-11-19 09:14:20.081212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.659 [2024-11-19 09:14:20.081218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.659 [2024-11-19 09:14:20.081227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.659 [2024-11-19 09:14:20.082683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.659 [2024-11-19 09:14:20.082787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.659 [2024-11-19 09:14:20.082807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.659 [2024-11-19 09:14:20.082809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:19.659 "tick_rate": 2300000000, 00:12:19.659 "poll_groups": [ 00:12:19.659 { 00:12:19.659 "name": "nvmf_tgt_poll_group_000", 00:12:19.659 "admin_qpairs": 0, 00:12:19.659 "io_qpairs": 0, 00:12:19.659 "current_admin_qpairs": 0, 00:12:19.659 "current_io_qpairs": 0, 00:12:19.659 "pending_bdev_io": 0, 00:12:19.659 "completed_nvme_io": 0, 00:12:19.659 "transports": [] 00:12:19.659 }, 00:12:19.659 { 00:12:19.659 "name": "nvmf_tgt_poll_group_001", 00:12:19.659 "admin_qpairs": 0, 00:12:19.659 "io_qpairs": 0, 00:12:19.659 "current_admin_qpairs": 0, 00:12:19.659 "current_io_qpairs": 0, 00:12:19.659 "pending_bdev_io": 0, 00:12:19.659 "completed_nvme_io": 0, 00:12:19.659 "transports": [] 00:12:19.659 }, 00:12:19.659 { 00:12:19.659 "name": "nvmf_tgt_poll_group_002", 00:12:19.659 "admin_qpairs": 0, 00:12:19.659 "io_qpairs": 0, 00:12:19.659 
"current_admin_qpairs": 0, 00:12:19.659 "current_io_qpairs": 0, 00:12:19.659 "pending_bdev_io": 0, 00:12:19.659 "completed_nvme_io": 0, 00:12:19.659 "transports": [] 00:12:19.659 }, 00:12:19.659 { 00:12:19.659 "name": "nvmf_tgt_poll_group_003", 00:12:19.659 "admin_qpairs": 0, 00:12:19.659 "io_qpairs": 0, 00:12:19.659 "current_admin_qpairs": 0, 00:12:19.659 "current_io_qpairs": 0, 00:12:19.659 "pending_bdev_io": 0, 00:12:19.659 "completed_nvme_io": 0, 00:12:19.659 "transports": [] 00:12:19.659 } 00:12:19.659 ] 00:12:19.659 }' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.659 [2024-11-19 09:14:20.329616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:19.659 "tick_rate": 2300000000, 00:12:19.659 "poll_groups": [ 00:12:19.659 { 00:12:19.659 "name": "nvmf_tgt_poll_group_000", 00:12:19.659 "admin_qpairs": 0, 00:12:19.659 "io_qpairs": 0, 00:12:19.659 "current_admin_qpairs": 0, 00:12:19.659 "current_io_qpairs": 0, 00:12:19.659 "pending_bdev_io": 0, 00:12:19.659 "completed_nvme_io": 0, 00:12:19.659 "transports": [ 00:12:19.659 { 00:12:19.659 "trtype": "TCP" 00:12:19.659 } 00:12:19.659 ] 00:12:19.659 }, 00:12:19.659 { 00:12:19.659 "name": "nvmf_tgt_poll_group_001", 00:12:19.659 "admin_qpairs": 0, 00:12:19.659 "io_qpairs": 0, 00:12:19.659 "current_admin_qpairs": 0, 00:12:19.659 "current_io_qpairs": 0, 00:12:19.659 "pending_bdev_io": 0, 00:12:19.659 "completed_nvme_io": 0, 00:12:19.659 "transports": [ 00:12:19.659 { 00:12:19.659 "trtype": "TCP" 00:12:19.659 } 00:12:19.659 ] 00:12:19.659 }, 00:12:19.659 { 00:12:19.659 "name": "nvmf_tgt_poll_group_002", 00:12:19.659 "admin_qpairs": 0, 00:12:19.659 "io_qpairs": 0, 00:12:19.659 "current_admin_qpairs": 0, 00:12:19.659 "current_io_qpairs": 0, 00:12:19.659 "pending_bdev_io": 0, 00:12:19.659 "completed_nvme_io": 0, 00:12:19.659 "transports": [ 00:12:19.659 { 00:12:19.659 "trtype": "TCP" 
00:12:19.659 } 00:12:19.659 ] 00:12:19.659 }, 00:12:19.659 { 00:12:19.659 "name": "nvmf_tgt_poll_group_003", 00:12:19.659 "admin_qpairs": 0, 00:12:19.659 "io_qpairs": 0, 00:12:19.659 "current_admin_qpairs": 0, 00:12:19.659 "current_io_qpairs": 0, 00:12:19.659 "pending_bdev_io": 0, 00:12:19.659 "completed_nvme_io": 0, 00:12:19.659 "transports": [ 00:12:19.659 { 00:12:19.659 "trtype": "TCP" 00:12:19.659 } 00:12:19.659 ] 00:12:19.659 } 00:12:19.659 ] 00:12:19.659 }' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.659 Malloc1 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.659 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
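
Note: the two nvmf_get_stats snapshots bracket nvmf_create_transport: before it, each of the four poll groups (one per core in the 0xF mask) reports an empty transports array; after it, every group carries a TCP transport. The jcount/jsum helpers in rpc.sh are just jq piped into wc -l or awk. The same check by hand, assuming rpc_cmd resolves to scripts/rpc.py as it does in this harness:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_get_stats | jq '.poll_groups[].name' | wc -l            # expect 4
  $rpc nvmf_get_stats | jq '.poll_groups[0].transports[0]'          # null before creation
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # flags exactly as traced
  $rpc nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'   # now "TCP"
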
common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.660 [2024-11-19 09:14:20.505602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:19.660 [2024-11-19 09:14:20.534160] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:19.660 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:19.660 could not add new controller: failed to write to nvme-fabrics device 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:19.660 09:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.660 09:14:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.038 09:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.038 09:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:21.038 09:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.038 09:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:21.038 09:14:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.977 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.978 [2024-11-19 09:14:23.911682] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:22.978 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:22.978 could not add new controller: failed to write to nvme-fabrics device 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.978 
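
Note: the exchange above is the per-host access-control test. With allow_any_host disabled and the whitelist empty, connecting as the generated host NQN fails with "Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host"; nvmf_subsystem_add_host makes the same connect succeed; after nvmf_subsystem_remove_host the connect is rejected again until allow_any_host is re-enabled. Condensed, with the NQNs from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  nvme connect --hostnqn=$HOSTNQN -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420 \
      || echo "rejected: host not whitelisted"
  $rpc nvmf_subsystem_add_host $SUBNQN $HOSTNQN      # whitelist -> connect succeeds
  nvme connect --hostnqn=$HOSTNQN -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
  nvme disconnect -n $SUBNQN
  $rpc nvmf_subsystem_remove_host $SUBNQN $HOSTNQN   # connect is rejected again
  $rpc nvmf_subsystem_allow_any_host -e $SUBNQN      # reopen the subsystem to any host
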
09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.978 09:14:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.356 09:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.356 09:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:24.356 09:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.356 09:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:24.356 09:14:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:26.262 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:26.262 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:26.262 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.262 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:26.262 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.262 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:26.262 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.262 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.262 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.263 
09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.263 [2024-11-19 09:14:27.275314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.263 09:14:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.642 09:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.642 09:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:27.642 09:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.642 09:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:27.642 09:14:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.548 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.807 [2024-11-19 09:14:30.629852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.807 09:14:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.746 09:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.746 09:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:30.746 09:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.746 09:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:30.746 09:14:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.280 [2024-11-19 09:14:33.927500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.280 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.281 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.281 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.281 09:14:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.217 09:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.217 09:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:34.217 09:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.217 09:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:34.217 09:14:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:36.121 
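
Note: from target/rpc.sh@81 onward the harness repeats the same provision-connect-teardown cycle loops=5 times; each pass recreates the subsystem, re-adds the 10.0.0.2:4420 TCP listener, re-attaches Malloc1 as namespace 5, opens it to any host, connects with nvme-cli, waits for the serial to show up in lsblk, then unwinds in reverse. One iteration, condensed from the trace (waitforserial's polling loop is reduced to its sleep plus a single lsblk check here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem $SUBNQN -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener $SUBNQN -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns $SUBNQN Malloc1 -n 5
      $rpc nvmf_subsystem_allow_any_host $SUBNQN
      # hostid is the uuid portion of the host NQN, as in the traced connects
      nvme connect --hostnqn=$HOSTNQN --hostid=${HOSTNQN##*:} -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
      sleep 2
      lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # serial visible -> device is up
      nvme disconnect -n $SUBNQN
      $rpc nvmf_subsystem_remove_ns $SUBNQN 5
      $rpc nvmf_delete_subsystem $SUBNQN
  done
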
09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:36.121 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:36.121 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.121 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:36.121 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.121 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.122 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.381 [2024-11-19 09:14:37.190189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.381 09:14:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.759 09:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.759 09:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:37.759 09:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.759 09:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:37.759 09:14:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
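The waitforserial helper traced here (common/autotest_common.sh@1200-1210) polls lsblk until a block device advertising the subsystem's serial number shows up, and waitforserial_disconnect (@1221-1233) performs the inverse check after nvme disconnect. A minimal reconstruction of the appearance check, inferred from the trace (the argument handling is an assumption, not the verbatim helper):

    # Poll lsblk until $2 (default 1) devices report serial $1; give up after ~16 tries.
    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }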
00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.661 [2024-11-19 09:14:40.620386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.661 09:14:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.039 09:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.039 09:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:41.039 09:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.039 09:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:41.039 09:14:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.946 09:14:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:43.206 
09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.206 [2024-11-19 09:14:44.023699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.206 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 [2024-11-19 09:14:44.071803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 
09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 [2024-11-19 09:14:44.119955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 [2024-11-19 09:14:44.168143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 [2024-11-19 09:14:44.216301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:43.207 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.208 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:43.467 "tick_rate": 2300000000, 00:12:43.467 "poll_groups": [ 00:12:43.467 { 00:12:43.467 "name": "nvmf_tgt_poll_group_000", 00:12:43.467 "admin_qpairs": 2, 00:12:43.467 "io_qpairs": 168, 00:12:43.467 "current_admin_qpairs": 0, 00:12:43.467 "current_io_qpairs": 0, 00:12:43.467 "pending_bdev_io": 0, 00:12:43.467 "completed_nvme_io": 219, 00:12:43.467 "transports": [ 00:12:43.467 { 00:12:43.467 "trtype": "TCP" 00:12:43.467 } 00:12:43.467 ] 00:12:43.467 }, 00:12:43.467 { 00:12:43.467 "name": "nvmf_tgt_poll_group_001", 00:12:43.467 "admin_qpairs": 2, 00:12:43.467 "io_qpairs": 168, 00:12:43.467 "current_admin_qpairs": 0, 00:12:43.467 "current_io_qpairs": 0, 00:12:43.467 "pending_bdev_io": 0, 00:12:43.467 "completed_nvme_io": 218, 00:12:43.467 "transports": [ 00:12:43.467 { 00:12:43.467 "trtype": "TCP" 00:12:43.467 } 00:12:43.467 ] 00:12:43.467 }, 00:12:43.467 { 00:12:43.467 "name": "nvmf_tgt_poll_group_002", 00:12:43.467 "admin_qpairs": 1, 00:12:43.467 "io_qpairs": 168, 00:12:43.467 "current_admin_qpairs": 0, 00:12:43.467 "current_io_qpairs": 0, 00:12:43.467 "pending_bdev_io": 0, 00:12:43.467 "completed_nvme_io": 267, 00:12:43.467 "transports": [ 00:12:43.467 { 00:12:43.467 "trtype": "TCP" 00:12:43.467 } 00:12:43.467 ] 00:12:43.467 }, 00:12:43.467 { 00:12:43.467 "name": "nvmf_tgt_poll_group_003", 00:12:43.467 "admin_qpairs": 2, 00:12:43.467 "io_qpairs": 168, 00:12:43.467 "current_admin_qpairs": 0, 00:12:43.467 "current_io_qpairs": 0, 00:12:43.467 "pending_bdev_io": 0, 00:12:43.467 "completed_nvme_io": 318, 00:12:43.467 "transports": [ 00:12:43.467 { 00:12:43.467 "trtype": "TCP" 00:12:43.467 } 00:12:43.467 ] 00:12:43.467 } 00:12:43.467 ] 00:12:43.467 }' 00:12:43.467 09:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:43.467 rmmod nvme_tcp 00:12:43.467 rmmod nvme_fabrics 00:12:43.467 rmmod nvme_keyring 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1045378 ']' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1045378 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 1045378 ']' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 1045378 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1045378 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:43.467 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:43.468 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
1045378' 00:12:43.468 killing process with pid 1045378 00:12:43.468 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 1045378 00:12:43.468 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 1045378 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.728 09:14:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:46.264 00:12:46.264 real 0m33.003s 00:12:46.264 user 1m39.649s 00:12:46.264 sys 0m6.495s 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.264 ************************************ 00:12:46.264 END TEST nvmf_rpc 00:12:46.264 ************************************ 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.264 ************************************ 00:12:46.264 START TEST nvmf_invalid 00:12:46.264 ************************************ 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:46.264 * Looking for test storage... 
00:12:46.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:46.264 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.265 --rc genhtml_branch_coverage=1 00:12:46.265 --rc genhtml_function_coverage=1 00:12:46.265 --rc genhtml_legend=1 00:12:46.265 --rc geninfo_all_blocks=1 00:12:46.265 --rc geninfo_unexecuted_blocks=1 00:12:46.265 00:12:46.265 ' 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.265 --rc genhtml_branch_coverage=1 00:12:46.265 --rc genhtml_function_coverage=1 00:12:46.265 --rc genhtml_legend=1 00:12:46.265 --rc geninfo_all_blocks=1 00:12:46.265 --rc geninfo_unexecuted_blocks=1 00:12:46.265 00:12:46.265 ' 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.265 --rc genhtml_branch_coverage=1 00:12:46.265 --rc genhtml_function_coverage=1 00:12:46.265 --rc genhtml_legend=1 00:12:46.265 --rc geninfo_all_blocks=1 00:12:46.265 --rc geninfo_unexecuted_blocks=1 00:12:46.265 00:12:46.265 ' 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:46.265 --rc genhtml_branch_coverage=1 00:12:46.265 --rc genhtml_function_coverage=1 00:12:46.265 --rc genhtml_legend=1 00:12:46.265 --rc geninfo_all_blocks=1 00:12:46.265 --rc geninfo_unexecuted_blocks=1 00:12:46.265 00:12:46.265 ' 00:12:46.265 09:14:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:46.265 09:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:46.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:46.265 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:46.266 09:14:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.836 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:52.837 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:52.837 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:52.837 Found net devices under 0000:86:00.0: cvl_0_0 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:52.837 Found net devices under 0000:86:00.1: cvl_0_1 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:12:52.837 00:12:52.837 --- 10.0.0.2 ping statistics --- 00:12:52.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.837 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:12:52.837 00:12:52.837 --- 10.0.0.1 ping statistics --- 00:12:52.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.837 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1053087 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1053087 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 1053087 ']' 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.837 09:14:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.838 [2024-11-19 09:14:53.028925] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
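With both ports discovered, nvmf_tcp_init builds a loopback topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1/24), the NVMe/TCP port 4420 is opened with a comment-tagged iptables rule so cleanup can find it later, and connectivity is verified with one ping in each direction before nvmf_tgt is launched inside the namespace. A sketch of the same setup, using the interface names and addresses from this run (requires root; a reconstruction of the traced commands, not the script itself):

```bash
#!/usr/bin/env bash
# Sketch: two-port loopback topology for NVMe/TCP testing, as traced above.
set -euo pipefail

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port; the comment tags the rule for later removal.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Because the target interface now lives in the namespace, the trace prefixes NVMF_APP with `ip netns exec cvl_0_0_ns_spdk` so nvmf_tgt binds its listener there.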
00:12:52.838 [2024-11-19 09:14:53.028993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.838 [2024-11-19 09:14:53.118725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.838 [2024-11-19 09:14:53.161600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.838 [2024-11-19 09:14:53.161636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.838 [2024-11-19 09:14:53.161643] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.838 [2024-11-19 09:14:53.161649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.838 [2024-11-19 09:14:53.161654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.838 [2024-11-19 09:14:53.163206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.838 [2024-11-19 09:14:53.163316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.838 [2024-11-19 09:14:53.163424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.838 [2024-11-19 09:14:53.163425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9503 00:12:52.838 [2024-11-19 09:14:53.465359] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:52.838 { 00:12:52.838 "nqn": "nqn.2016-06.io.spdk:cnode9503", 00:12:52.838 "tgt_name": "foobar", 00:12:52.838 "method": "nvmf_create_subsystem", 00:12:52.838 "req_id": 1 00:12:52.838 } 00:12:52.838 Got JSON-RPC error response 00:12:52.838 response: 00:12:52.838 { 00:12:52.838 "code": -32603, 00:12:52.838 "message": "Unable to find target foobar" 00:12:52.838 }' 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:52.838 { 00:12:52.838 "nqn": "nqn.2016-06.io.spdk:cnode9503", 00:12:52.838 "tgt_name": "foobar", 00:12:52.838 "method": "nvmf_create_subsystem", 00:12:52.838 "req_id": 1 00:12:52.838 } 00:12:52.838 Got JSON-RPC error response 00:12:52.838 
response: 00:12:52.838 { 00:12:52.838 "code": -32603, 00:12:52.838 "message": "Unable to find target foobar" 00:12:52.838 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3442 00:12:52.838 [2024-11-19 09:14:53.674073] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3442: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:52.838 { 00:12:52.838 "nqn": "nqn.2016-06.io.spdk:cnode3442", 00:12:52.838 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:52.838 "method": "nvmf_create_subsystem", 00:12:52.838 "req_id": 1 00:12:52.838 } 00:12:52.838 Got JSON-RPC error response 00:12:52.838 response: 00:12:52.838 { 00:12:52.838 "code": -32602, 00:12:52.838 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:52.838 }' 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:52.838 { 00:12:52.838 "nqn": "nqn.2016-06.io.spdk:cnode3442", 00:12:52.838 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:52.838 "method": "nvmf_create_subsystem", 00:12:52.838 "req_id": 1 00:12:52.838 } 00:12:52.838 Got JSON-RPC error response 00:12:52.838 response: 00:12:52.838 { 00:12:52.838 "code": -32602, 00:12:52.838 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:52.838 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:52.838 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22060 00:12:52.838 [2024-11-19 09:14:53.890789] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22060: invalid model number 'SPDK_Controller' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:53.097 { 00:12:53.097 "nqn": "nqn.2016-06.io.spdk:cnode22060", 00:12:53.097 "model_number": "SPDK_Controller\u001f", 00:12:53.097 "method": "nvmf_create_subsystem", 00:12:53.097 "req_id": 1 00:12:53.097 } 00:12:53.097 Got JSON-RPC error response 00:12:53.097 response: 00:12:53.097 { 00:12:53.097 "code": -32602, 00:12:53.097 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.097 }' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:53.097 { 00:12:53.097 "nqn": "nqn.2016-06.io.spdk:cnode22060", 00:12:53.097 "model_number": "SPDK_Controller\u001f", 00:12:53.097 "method": "nvmf_create_subsystem", 00:12:53.097 "req_id": 1 00:12:53.097 } 00:12:53.097 Got JSON-RPC error response 00:12:53.097 response: 00:12:53.097 { 00:12:53.097 "code": -32602, 00:12:53.097 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.097 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:53.097 09:14:53 
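The first nvmf_invalid cases above all follow one pattern: call nvmf_create_subsystem over JSON-RPC with deliberately bad input (an unknown target name, or a serial/model number containing the control byte 0x1f), capture the error response, and glob-match the expected message ("Unable to find target", "Invalid SN", "Invalid MN"). A hedged sketch of that pattern, using the rpc.py path from this run (the assertions are illustrative, not the literal test script):

```bash
#!/usr/bin/env bash
# Sketch: negative-testing nvmf_create_subsystem and asserting on the
# JSON-RPC error text, as in the traced invalid.sh cases.
set -uo pipefail

RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Nonexistent target name -> code -32603 "Unable to find target foobar"
out=$("$RPC_PY" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9503 2>&1) || true
[[ $out == *'Unable to find target'* ]] || { echo "unexpected: $out"; exit 1; }

# Serial number with an embedded 0x1f byte -> code -32602 "Invalid SN ..."
out=$("$RPC_PY" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
      nqn.2016-06.io.spdk:cnode3442 2>&1) || true
[[ $out == *'Invalid SN'* ]] || { echo "unexpected: $out"; exit 1; }
```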
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:53.097 
09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:53.097 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x6e' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'H9H)E@k3m$Tj)5;v~nA7' 00:12:53.098 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'H9H)E@k3m$Tj)5;v~nA7' nqn.2016-06.io.spdk:cnode27324 00:12:53.356 [2024-11-19 09:14:54.235987] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27324: invalid serial number 'H9H)E@k3m$Tj)5;v~nA7' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:53.356 { 00:12:53.356 "nqn": "nqn.2016-06.io.spdk:cnode27324", 00:12:53.356 "serial_number": "H9H)E@k3m$Tj)5;v~\u007fnA7", 00:12:53.356 "method": "nvmf_create_subsystem", 00:12:53.356 "req_id": 1 00:12:53.356 } 00:12:53.356 Got JSON-RPC error response 00:12:53.356 response: 00:12:53.356 { 00:12:53.356 "code": -32602, 00:12:53.356 "message": "Invalid SN H9H)E@k3m$Tj)5;v~\u007fnA7" 00:12:53.356 }' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:53.356 { 00:12:53.356 "nqn": "nqn.2016-06.io.spdk:cnode27324", 00:12:53.356 "serial_number": "H9H)E@k3m$Tj)5;v~\u007fnA7", 00:12:53.356 "method": "nvmf_create_subsystem", 00:12:53.356 "req_id": 1 00:12:53.356 } 00:12:53.356 Got JSON-RPC error response 00:12:53.356 response: 00:12:53.356 { 00:12:53.356 "code": -32602, 00:12:53.356 "message": "Invalid SN H9H)E@k3m$Tj)5;v~\u007fnA7" 00:12:53.356 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' 
'70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x4e' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.356 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 102 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:53.357 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:53.615 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ q == \- ]] 00:12:53.616 09:14:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'qd`N{N4VCjBfR=n:n3|$hjR.N=Pp6!"6ql.-2% /dev/null' 00:12:55.939 09:14:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.845 09:14:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:57.845 00:12:57.845 real 0m12.061s 00:12:57.845 user 0m18.747s 00:12:57.845 sys 0m5.483s 00:12:57.845 09:14:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:57.845 09:14:58 
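The long per-character trace above is gen_random_s building random serial and model numbers from the printable ASCII range: an array of decimal codes 32-127, one random pick per position, printf %x plus echo -e to turn the code into a character, and a final check that the result does not begin with '-' (which would look like a command-line option). A compact reconstruction of that helper; the re-roll on a leading '-' is my reading of the traced guard, not confirmed from the script source:

```bash
# Sketch: random printable-ASCII string generator, as traced in gen_random_s.
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))        # decimal codes, as in the traced array
    for (( ll = 0; ll < length; ll++ )); do
        local code=${chars[RANDOM % ${#chars[@]}]}
        # printf %x + echo -e: decimal code -> hex escape -> character
        string+=$(echo -e "\\x$(printf %x "$code")")
    done
    # Assumed handling of the traced [[ ... == \- ]] guard: retry on leading '-'
    [[ ${string:0:1} == - ]] && { gen_random_s "$length"; return; }
    echo "$string"
}

gen_random_s 21   # e.g. the 'H9H)E@k3m$Tj)5;v~nA7'-style serial numbers above
```

The 21- and 41-character outputs are then fed back through nvmf_create_subsystem to exercise the "Invalid SN"/"Invalid MN" paths, as seen in the cnode27324 response above.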
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.845 ************************************ 00:12:57.845 END TEST nvmf_invalid 00:12:57.845 ************************************ 00:12:58.106 09:14:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:58.106 09:14:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:58.106 09:14:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:58.106 09:14:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.106 ************************************ 00:12:58.106 START TEST nvmf_connect_stress 00:12:58.106 ************************************ 00:12:58.106 09:14:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:58.106 * Looking for test storage... 00:12:58.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.106 --rc genhtml_branch_coverage=1 00:12:58.106 --rc genhtml_function_coverage=1 00:12:58.106 --rc genhtml_legend=1 00:12:58.106 --rc geninfo_all_blocks=1 00:12:58.106 --rc geninfo_unexecuted_blocks=1 00:12:58.106 00:12:58.106 ' 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.106 --rc genhtml_branch_coverage=1 00:12:58.106 --rc genhtml_function_coverage=1 00:12:58.106 --rc genhtml_legend=1 00:12:58.106 --rc geninfo_all_blocks=1 00:12:58.106 --rc geninfo_unexecuted_blocks=1 00:12:58.106 00:12:58.106 ' 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.106 --rc genhtml_branch_coverage=1 00:12:58.106 --rc genhtml_function_coverage=1 00:12:58.106 --rc genhtml_legend=1 00:12:58.106 --rc geninfo_all_blocks=1 00:12:58.106 --rc geninfo_unexecuted_blocks=1 00:12:58.106 00:12:58.106 ' 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:58.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.106 --rc genhtml_branch_coverage=1 00:12:58.106 --rc genhtml_function_coverage=1 00:12:58.106 --rc genhtml_legend=1 00:12:58.106 --rc geninfo_all_blocks=1 00:12:58.106 --rc geninfo_unexecuted_blocks=1 00:12:58.106 00:12:58.106 ' 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.106 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:58.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.107 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:58.366 09:14:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:04.935 09:15:04 
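The "[: : integer expression expected" message above is a benign scripting bug rather than a test failure: common.sh line 33 runs the arithmetic test '[' '' -eq 1 ']', and [ ... -eq ... ] needs integers on both sides, so an empty expansion makes the test error out (exit status 2) instead of evaluating to false. A minimal reproduction and a guarded rewrite, with "flag" standing in for whichever knob common.sh actually reads at that line:

  flag=''                                 # unset/empty, as in this run
  [ "$flag" -eq 1 ] && echo enabled       # prints "[: : integer expression expected"
  [ "${flag:-0}" -eq 1 ] && echo enabled  # defaults empty to 0; quietly false instead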
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:04.935 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:04.935 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:04.935 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:04.936 Found net devices under 0000:86:00.0: cvl_0_0 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:04.936 Found net devices under 0000:86:00.1: cvl_0_1 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
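For reference, the discovery pass traced above (nvmf/common.sh@366 onward) resolves each whitelisted PCI function to its kernel interface by globbing sysfs. A standalone sketch of the same lookup, using the two E810 functions this host reported; the PCI addresses are host-specific:

  # Map PCI functions to net interface names, as nvmf/common.sh does with
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  for pci in 0000:86:00.0 0000:86:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$path" ] || continue   # glob stays literal when no netdev is bound
      echo "Found net device under $pci: ${path##*/}"
    done
  done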
-- # net_devs+=("${pci_net_devs[@]}") 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:04.936 09:15:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:04.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:04.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms
00:13:04.936 
00:13:04.936 --- 10.0.0.2 ping statistics ---
00:13:04.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:04.936 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:04.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:04.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms
00:13:04.936 
00:13:04.936 --- 10.0.0.1 ping statistics ---
00:13:04.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:04.936 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1057499
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1057499
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 1057499 ']'
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100
00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:13:04.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.936 [2024-11-19 09:15:05.197061] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:13:04.936 [2024-11-19 09:15:05.197107] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.936 [2024-11-19 09:15:05.276625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:04.936 [2024-11-19 09:15:05.318517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.936 [2024-11-19 09:15:05.318554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.936 [2024-11-19 09:15:05.318562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.936 [2024-11-19 09:15:05.318569] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.936 [2024-11-19 09:15:05.318574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.936 [2024-11-19 09:15:05.320056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.936 [2024-11-19 09:15:05.320164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.936 [2024-11-19 09:15:05.320164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.936 [2024-11-19 09:15:05.460839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:04.936 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
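Stepping back, the namespace plumbing traced at nvmf/common.sh@271 through @291 above boils down to a handful of iproute2/iptables calls. A sketch using this run's names (cvl_0_0, cvl_0_1, 10.0.0.0/24); it needs root, and the interface names vary per host:

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns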
00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.937 [2024-11-19 09:15:05.481051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.937 NULL1 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1057791 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 
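The rpc_cmd calls above reach the target over /var/tmp/spdk.sock. Outside the harness the same bring-up can be issued with SPDK's scripts/rpc.py; the values are copied from this run, the rpc.py path is assumed relative to the SPDK tree, and the flags are reproduced as recorded rather than re-derived:

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o -u 8192     # flags exactly as traced above
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512             # 1000 MB null bdev, 512-byte blocks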
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:04.937 09:15:05 
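The twenty for/cat iterations above assemble rpc.txt ($rpcs) as a batch of RPC payloads to replay against the target while the stress client runs. The loop has this shape; the heredoc bodies are not echoed into this trace, so the payload line below is illustrative only:

  rpcs=$testdir/rpc.txt              # $testdir: connect_stress.sh's own directory
  rm -f "$rpcs"
  for i in $(seq 1 20); do
    # the real script cat-appends an RPC body here; this echo is a stand-in
    echo "bdev_null_create NULL$i 1000 512" >> "$rpcs"
  done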
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.937 09:15:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.194 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.194 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:05.194 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.194 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.194 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.756 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.756 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:05.756 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:05.756 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.756 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.013 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.013 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:06.013 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.013 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.013 09:15:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.269 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.269 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:06.269 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.269 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.269 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.525 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.525 09:15:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:06.525 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.525 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.525 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.089 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.089 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:07.089 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.089 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.089 09:15:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.346 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.346 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:07.346 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.346 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.346 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.603 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.603 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:07.603 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.603 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.603 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.861 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.861 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:07.861 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.861 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.861 09:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.119 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.119 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:08.119 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.119 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.119 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.683 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.683 09:15:09 
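The kill -0 1057791 / rpc_cmd pairs repeating above and below are a liveness poll: the script keeps feeding the batched RPCs to the target for as long as the stress client (PERF_PID) is alive. Reconstructed from the trace, the pattern is:

  # kill -0 probes whether the PID exists without sending a signal; its final
  # failing probe is what prints the "No such process" line near the end below.
  while kill -0 "$PERF_PID"; do
    rpc_cmd < "$rpcs"        # exercise the target while connections churn
  done
  wait "$PERF_PID"           # reap the client's exit status afterwards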
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:08.683 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.683 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.683 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.941 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.941 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:08.941 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.941 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.941 09:15:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.198 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.198 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:09.198 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.198 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.198 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.455 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.455 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:09.455 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.455 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.455 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.020 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.020 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:10.020 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.020 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.020 09:15:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.277 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.277 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:10.277 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.277 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.277 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.535 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.535 09:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:10.535 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.535 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.535 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.792 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.792 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:10.792 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.792 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.792 09:15:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.051 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.051 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:11.051 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.051 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.051 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.616 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.616 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:11.616 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.616 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.616 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:11.874 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.874 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:11.874 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.874 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.874 09:15:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.131 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.131 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:12.131 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.131 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.131 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.389 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.389 09:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:12.389 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.389 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.389 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.953 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.953 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:12.953 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.953 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.953 09:15:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.209 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.209 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:13.209 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.209 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.209 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.468 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.468 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:13.468 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.468 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.468 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.725 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.725 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:13.725 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.725 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.725 09:15:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.983 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.983 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:13.983 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.983 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.983 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.547 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.547 09:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:14.547 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.547 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.547 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.805 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.805 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:14.805 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.805 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.805 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:14.805 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:15.064 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.064 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1057791 00:13:15.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1057791) - No such process 00:13:15.064 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1057791 00:13:15.064 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:15.064 09:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:15.064 rmmod nvme_tcp 00:13:15.064 rmmod nvme_fabrics 00:13:15.064 rmmod nvme_keyring 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1057499 ']' 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1057499 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 1057499 ']' 00:13:15.064 09:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 1057499 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1057499 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1057499' 00:13:15.064 killing process with pid 1057499 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 1057499 00:13:15.064 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 1057499 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.323 09:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:17.861 00:13:17.861 real 0m19.392s 00:13:17.861 user 0m40.236s 00:13:17.861 sys 0m8.774s 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.861 ************************************ 00:13:17.861 END TEST nvmf_connect_stress 00:13:17.861 ************************************ 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:17.861 
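The nvmftestfini teardown traced above reduces to the following; module, pid, and interface names are the ones this run used, root is required, and the final netns step is an assumption, since _remove_spdk_ns runs with its output discarded:

  modprobe -v -r nvme-tcp               # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # stop nvmf_tgt (pid 1057499 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1              # clear the initiator-side address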
09:15:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:17.861 ************************************ 00:13:17.861 START TEST nvmf_fused_ordering 00:13:17.861 ************************************ 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:17.861 * Looking for test storage... 00:13:17.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:17.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.861 --rc genhtml_branch_coverage=1 00:13:17.861 --rc genhtml_function_coverage=1 00:13:17.861 --rc genhtml_legend=1 00:13:17.861 --rc geninfo_all_blocks=1 00:13:17.861 --rc geninfo_unexecuted_blocks=1 00:13:17.861 00:13:17.861 ' 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:17.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.861 --rc genhtml_branch_coverage=1 00:13:17.861 --rc genhtml_function_coverage=1 00:13:17.861 --rc genhtml_legend=1 00:13:17.861 --rc geninfo_all_blocks=1 00:13:17.861 --rc geninfo_unexecuted_blocks=1 00:13:17.861 00:13:17.861 ' 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:17.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.861 --rc genhtml_branch_coverage=1 00:13:17.861 --rc genhtml_function_coverage=1 00:13:17.861 --rc genhtml_legend=1 00:13:17.861 --rc geninfo_all_blocks=1 00:13:17.861 --rc geninfo_unexecuted_blocks=1 00:13:17.861 00:13:17.861 ' 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:17.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.861 --rc genhtml_branch_coverage=1 00:13:17.861 --rc genhtml_function_coverage=1 00:13:17.861 --rc genhtml_legend=1 00:13:17.861 --rc geninfo_all_blocks=1 00:13:17.861 --rc geninfo_unexecuted_blocks=1 00:13:17.861 00:13:17.861 ' 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
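The lcov gate traced above (lt 1.15 2) splits each version string on dots and dashes with IFS=.-, reads the components into arrays, and compares them position by position, padding the shorter array with zeros. The same comparison as a self-contained function (numeric components only, which is all the check above needs):

    # Succeeds when $1 is a strictly lower version than $2.
    version_lt() {
        local IFS=.-
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                            # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"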
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
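Worth noting from the variables above: common.sh derives the initiator identity at source time with nvme gen-hostnqn, and NVME_HOSTID (80aaeb9f-... here) is just the UUID tail of that NQN. A sketch of the derivation, assuming nvme-cli is installed:

    hostnqn=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*:}           # strip through the last colon, leaving the UUID
    echo "NVME_HOSTNQN=$hostnqn NVME_HOSTID=$hostid"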
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.861 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
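The PATH values above balloon because paths/export.sh prepends its toolchain directories every time a nested script sources it, so the same /opt/golangci, /opt/protoc, and /opt/go prefixes stack up once per nesting level. The duplicates are harmless (lookup stops at the first hit) but noisy; a hypothetical dedupe helper, not part of SPDK, could collapse them:

    # Collapse duplicate PATH entries, preserving first-seen order (illustrative only).
    dedupe_path() {
        local entry out='' IFS=:
        local -A seen=()
        for entry in $PATH; do
            [[ -n ${seen[$entry]:-} ]] && continue
            seen[$entry]=1
            out+=${out:+:}$entry
        done
        PATH=$out
    }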
-- # '[' '' -eq 1 ']' 00:13:17.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:17.862 09:15:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:24.435 09:15:24 
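The "[: : integer expression expected" message above is a real, if benign, scripting bug: common.sh line 33 runs '[' "$VAR" -eq 1 ']' while the variable is empty, and test(1) cannot compare an empty string numerically. The usual hardening is to default the expansion before the arithmetic test:

    # Empty or unset degrades to 0 instead of tripping test(1).
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then    # SOME_FLAG is a stand-in for the unset variable
        echo "flag enabled"
    fi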
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:24.435 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:24.435 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:24.435 Found net devices under 0000:86:00.0: cvl_0_0 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:24.435 Found net devices under 0000:86:00.1: cvl_0_1 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.435 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:24.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:13:24.436 00:13:24.436 --- 10.0.0.2 ping statistics --- 00:13:24.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.436 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:24.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:13:24.436 00:13:24.436 --- 10.0.0.1 ping statistics --- 00:13:24.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.436 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1063176 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1063176 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 1063176 ']' 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:24.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 [2024-11-19 09:15:24.663119] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:13:24.436 [2024-11-19 09:15:24.663169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.436 [2024-11-19 09:15:24.742282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.436 [2024-11-19 09:15:24.783700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.436 [2024-11-19 09:15:24.783736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.436 [2024-11-19 09:15:24.783743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.436 [2024-11-19 09:15:24.783749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.436 [2024-11-19 09:15:24.783754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.436 [2024-11-19 09:15:24.784292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 [2024-11-19 09:15:24.914900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 [2024-11-19 09:15:24.935072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 NULL1 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.436 09:15:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:24.436 [2024-11-19 09:15:24.989906] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
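For orientation, the bring-up the trace above just completed reduces to a dozen commands: move the target-side port (cvl_0_0) into its own network namespace, address both ends, open TCP/4420 through iptables, launch nvmf_tgt inside the namespace, then configure transport, subsystem, listener, and a null-bdev namespace over RPC. A condensed sketch using the names and addresses this job detected (rpc.py is assumed to talk to the default /var/tmp/spdk.sock, which the netns does not hide):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # sanity-check reachability first
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # Once the target listens on /var/tmp/spdk.sock, configure it over RPC:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1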
00:13:24.436 [2024-11-19 09:15:24.989960] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063208 ] 00:13:24.436 Attached to nqn.2016-06.io.spdk:cnode1 00:13:24.436 Namespace ID: 1 size: 1GB 00:13:24.436 fused_ordering(0) ... 00:13:26.096 fused_ordering(850) [fused_ordering(1) through fused_ordering(849) condensed: identical sequential counter lines, timestamps advancing from 00:13:24.436 to 00:13:26.096]
fused_ordering(851) 00:13:26.096 fused_ordering(852) 00:13:26.096 fused_ordering(853) 00:13:26.096 fused_ordering(854) 00:13:26.096 fused_ordering(855) 00:13:26.096 fused_ordering(856) 00:13:26.096 fused_ordering(857) 00:13:26.096 fused_ordering(858) 00:13:26.096 fused_ordering(859) 00:13:26.096 fused_ordering(860) 00:13:26.096 fused_ordering(861) 00:13:26.096 fused_ordering(862) 00:13:26.096 fused_ordering(863) 00:13:26.096 fused_ordering(864) 00:13:26.096 fused_ordering(865) 00:13:26.096 fused_ordering(866) 00:13:26.096 fused_ordering(867) 00:13:26.096 fused_ordering(868) 00:13:26.096 fused_ordering(869) 00:13:26.096 fused_ordering(870) 00:13:26.096 fused_ordering(871) 00:13:26.096 fused_ordering(872) 00:13:26.096 fused_ordering(873) 00:13:26.096 fused_ordering(874) 00:13:26.096 fused_ordering(875) 00:13:26.096 fused_ordering(876) 00:13:26.096 fused_ordering(877) 00:13:26.096 fused_ordering(878) 00:13:26.096 fused_ordering(879) 00:13:26.096 fused_ordering(880) 00:13:26.096 fused_ordering(881) 00:13:26.096 fused_ordering(882) 00:13:26.096 fused_ordering(883) 00:13:26.096 fused_ordering(884) 00:13:26.096 fused_ordering(885) 00:13:26.096 fused_ordering(886) 00:13:26.096 fused_ordering(887) 00:13:26.096 fused_ordering(888) 00:13:26.096 fused_ordering(889) 00:13:26.096 fused_ordering(890) 00:13:26.096 fused_ordering(891) 00:13:26.096 fused_ordering(892) 00:13:26.096 fused_ordering(893) 00:13:26.096 fused_ordering(894) 00:13:26.096 fused_ordering(895) 00:13:26.096 fused_ordering(896) 00:13:26.096 fused_ordering(897) 00:13:26.096 fused_ordering(898) 00:13:26.096 fused_ordering(899) 00:13:26.096 fused_ordering(900) 00:13:26.096 fused_ordering(901) 00:13:26.096 fused_ordering(902) 00:13:26.096 fused_ordering(903) 00:13:26.096 fused_ordering(904) 00:13:26.096 fused_ordering(905) 00:13:26.096 fused_ordering(906) 00:13:26.096 fused_ordering(907) 00:13:26.096 fused_ordering(908) 00:13:26.096 fused_ordering(909) 00:13:26.096 fused_ordering(910) 00:13:26.096 fused_ordering(911) 00:13:26.096 fused_ordering(912) 00:13:26.096 fused_ordering(913) 00:13:26.096 fused_ordering(914) 00:13:26.096 fused_ordering(915) 00:13:26.096 fused_ordering(916) 00:13:26.096 fused_ordering(917) 00:13:26.096 fused_ordering(918) 00:13:26.096 fused_ordering(919) 00:13:26.096 fused_ordering(920) 00:13:26.096 fused_ordering(921) 00:13:26.096 fused_ordering(922) 00:13:26.096 fused_ordering(923) 00:13:26.096 fused_ordering(924) 00:13:26.097 fused_ordering(925) 00:13:26.097 fused_ordering(926) 00:13:26.097 fused_ordering(927) 00:13:26.097 fused_ordering(928) 00:13:26.097 fused_ordering(929) 00:13:26.097 fused_ordering(930) 00:13:26.097 fused_ordering(931) 00:13:26.097 fused_ordering(932) 00:13:26.097 fused_ordering(933) 00:13:26.097 fused_ordering(934) 00:13:26.097 fused_ordering(935) 00:13:26.097 fused_ordering(936) 00:13:26.097 fused_ordering(937) 00:13:26.097 fused_ordering(938) 00:13:26.097 fused_ordering(939) 00:13:26.097 fused_ordering(940) 00:13:26.097 fused_ordering(941) 00:13:26.097 fused_ordering(942) 00:13:26.097 fused_ordering(943) 00:13:26.097 fused_ordering(944) 00:13:26.097 fused_ordering(945) 00:13:26.097 fused_ordering(946) 00:13:26.097 fused_ordering(947) 00:13:26.097 fused_ordering(948) 00:13:26.097 fused_ordering(949) 00:13:26.097 fused_ordering(950) 00:13:26.097 fused_ordering(951) 00:13:26.097 fused_ordering(952) 00:13:26.097 fused_ordering(953) 00:13:26.097 fused_ordering(954) 00:13:26.097 fused_ordering(955) 00:13:26.097 fused_ordering(956) 00:13:26.097 fused_ordering(957) 00:13:26.097 fused_ordering(958) 
00:13:26.097 fused_ordering(959) 00:13:26.097 fused_ordering(960) 00:13:26.097 fused_ordering(961) 00:13:26.097 fused_ordering(962) 00:13:26.097 fused_ordering(963) 00:13:26.097 fused_ordering(964) 00:13:26.097 fused_ordering(965) 00:13:26.097 fused_ordering(966) 00:13:26.097 fused_ordering(967) 00:13:26.097 fused_ordering(968) 00:13:26.097 fused_ordering(969) 00:13:26.097 fused_ordering(970) 00:13:26.097 fused_ordering(971) 00:13:26.097 fused_ordering(972) 00:13:26.097 fused_ordering(973) 00:13:26.097 fused_ordering(974) 00:13:26.097 fused_ordering(975) 00:13:26.097 fused_ordering(976) 00:13:26.097 fused_ordering(977) 00:13:26.097 fused_ordering(978) 00:13:26.097 fused_ordering(979) 00:13:26.097 fused_ordering(980) 00:13:26.097 fused_ordering(981) 00:13:26.097 fused_ordering(982) 00:13:26.097 fused_ordering(983) 00:13:26.097 fused_ordering(984) 00:13:26.097 fused_ordering(985) 00:13:26.097 fused_ordering(986) 00:13:26.097 fused_ordering(987) 00:13:26.097 fused_ordering(988) 00:13:26.097 fused_ordering(989) 00:13:26.097 fused_ordering(990) 00:13:26.097 fused_ordering(991) 00:13:26.097 fused_ordering(992) 00:13:26.097 fused_ordering(993) 00:13:26.097 fused_ordering(994) 00:13:26.097 fused_ordering(995) 00:13:26.097 fused_ordering(996) 00:13:26.097 fused_ordering(997) 00:13:26.097 fused_ordering(998) 00:13:26.097 fused_ordering(999) 00:13:26.097 fused_ordering(1000) 00:13:26.097 fused_ordering(1001) 00:13:26.097 fused_ordering(1002) 00:13:26.097 fused_ordering(1003) 00:13:26.097 fused_ordering(1004) 00:13:26.097 fused_ordering(1005) 00:13:26.097 fused_ordering(1006) 00:13:26.097 fused_ordering(1007) 00:13:26.097 fused_ordering(1008) 00:13:26.097 fused_ordering(1009) 00:13:26.097 fused_ordering(1010) 00:13:26.097 fused_ordering(1011) 00:13:26.097 fused_ordering(1012) 00:13:26.097 fused_ordering(1013) 00:13:26.097 fused_ordering(1014) 00:13:26.097 fused_ordering(1015) 00:13:26.097 fused_ordering(1016) 00:13:26.097 fused_ordering(1017) 00:13:26.097 fused_ordering(1018) 00:13:26.097 fused_ordering(1019) 00:13:26.097 fused_ordering(1020) 00:13:26.097 fused_ordering(1021) 00:13:26.097 fused_ordering(1022) 00:13:26.097 fused_ordering(1023) 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.097 rmmod nvme_tcp 00:13:26.097 rmmod nvme_fabrics 00:13:26.097 rmmod nvme_keyring 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:26.097 09:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1063176 ']' 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1063176 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 1063176 ']' 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 1063176 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:26.097 09:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1063176 00:13:26.097 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:26.097 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:26.097 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1063176' 00:13:26.097 killing process with pid 1063176 00:13:26.097 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 1063176 00:13:26.097 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 1063176 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.357 09:15:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.265 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:28.265 00:13:28.265 real 0m10.809s 00:13:28.265 user 0m5.121s 00:13:28.265 sys 0m5.922s 00:13:28.265 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:28.265 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:28.265 ************************************ 00:13:28.265 END TEST nvmf_fused_ordering 00:13:28.265 
************************************ 00:13:28.265 09:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:28.265 09:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:28.265 09:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:28.265 09:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:28.265 ************************************ 00:13:28.265 START TEST nvmf_ns_masking 00:13:28.265 ************************************ 00:13:28.265 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:28.526 * Looking for test storage... 00:13:28.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:28.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.526 --rc genhtml_branch_coverage=1 00:13:28.526 --rc genhtml_function_coverage=1 00:13:28.526 --rc genhtml_legend=1 00:13:28.526 --rc geninfo_all_blocks=1 00:13:28.526 --rc geninfo_unexecuted_blocks=1 00:13:28.526 00:13:28.526 ' 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:28.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.526 --rc genhtml_branch_coverage=1 00:13:28.526 --rc genhtml_function_coverage=1 00:13:28.526 --rc genhtml_legend=1 00:13:28.526 --rc geninfo_all_blocks=1 00:13:28.526 --rc geninfo_unexecuted_blocks=1 00:13:28.526 00:13:28.526 ' 00:13:28.526 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:28.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.526 --rc genhtml_branch_coverage=1 00:13:28.526 --rc genhtml_function_coverage=1 00:13:28.526 --rc genhtml_legend=1 00:13:28.526 --rc geninfo_all_blocks=1 00:13:28.526 --rc geninfo_unexecuted_blocks=1 00:13:28.526 00:13:28.527 ' 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:28.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.527 --rc genhtml_branch_coverage=1 00:13:28.527 --rc genhtml_function_coverage=1 00:13:28.527 --rc genhtml_legend=1 00:13:28.527 --rc geninfo_all_blocks=1 00:13:28.527 --rc geninfo_unexecuted_blocks=1 00:13:28.527 00:13:28.527 ' 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:28.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
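[Note on the shell error just logged: common.sh line 33 runs '[' '' -eq 1 ']', an integer comparison against an empty variable, so bash reports "[: : integer expression expected". The run tolerates this because the test merely evaluates false, but the usual fix is a default expansion so the operand is always numeric. A minimal sketch of that guard, mirroring the NVMF_APP+=() pattern seen in this trace; SOME_TEST_FLAG and --some-extra-arg are hypothetical stand-ins, since the log does not show which variable was empty:

    NVMF_APP=(./build/bin/nvmf_tgt)          # app argv array, as assembled elsewhere in this trace
    # "${SOME_TEST_FLAG:-0}" substitutes 0 when the flag is unset or empty,
    # so [ ... -eq 1 ] always sees an integer and never errors out.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-extra-arg)         # hypothetical consequence of the flag
    fi
]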
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7d179ec5-c02e-499e-93b8-8a113c980b7f 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=67799985-a9ab-4164-80b6-d44abdb10f56 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=27d62be3-40d9-4305-a819-629ae5d7f4c5 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:28.527 09:15:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:35.104 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.104 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:35.104 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:35.104 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:35.104 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:35.104 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:35.104 09:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:35.104 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:35.104 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:35.105 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:35.105 09:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:35.105 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:35.105 Found net devices under 0000:86:00.0: cvl_0_0 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
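[Note: the "Found 0000:86:00.0 (0x8086 - 0x159b)" discovery above is plain sysfs walking — each matched PCI function exposes its bound netdev under /sys/bus/pci/devices/<bdf>/net/, and the [[ up == up ]] checks compare the device's operstate. A sketch of the same lookup, with the two e810 functions from this run hard-coded for illustration:

    # Resolve netdev names for the e810 ports seen in this log.
    for pci in 0000:86:00.0 0000:86:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] || continue              # no netdev bound to this function
            dev=${path##*/}                         # same strip as pci_net_devs[@]##*/ in the trace
            state=$(cat "$path/operstate")          # "up" is what the [[ up == up ]] test compares
            echo "Found net devices under $pci: $dev ($state)"
        done
    done
]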
00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:35.105 Found net devices under 0000:86:00.1: cvl_0_1 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.105 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.106 09:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:35.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:13:35.106 00:13:35.106 --- 10.0.0.2 ping statistics --- 00:13:35.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.106 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:13:35.106 00:13:35.106 --- 10.0.0.1 ping statistics --- 00:13:35.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.106 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1067178 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1067178 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1067178 ']' 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:35.106 [2024-11-19 09:15:35.563091] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:13:35.106 [2024-11-19 09:15:35.563142] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.106 [2024-11-19 09:15:35.644705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.106 [2024-11-19 09:15:35.686049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.106 [2024-11-19 09:15:35.686084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.106 [2024-11-19 09:15:35.686091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.106 [2024-11-19 09:15:35.686098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.106 [2024-11-19 09:15:35.686103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
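[Note: stripped of xtrace noise, the bring-up recorded above and continued below reduces to a short recipe — move the target-side port into its own network namespace, give each side a 10.0.0.x address, start nvmf_tgt inside the namespace, then provision the subsystem over the RPC socket. A condensed sketch using the names and values as they appear in this log (64 MiB malloc bdevs with 512-byte blocks; rpc.py path shortened from the full jenkins workspace path):

    rpc=./scripts/rpc.py
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    sleep 2                                               # the harness polls the RPC socket (waitforlisten) instead
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC client can stay in the root namespace because /var/tmp/spdk.sock is a filesystem Unix socket, reachable from both sides.]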
00:13:35.106 [2024-11-19 09:15:35.686646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.106 09:15:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:35.106 [2024-11-19 09:15:35.985877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.106 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:35.106 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:35.106 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:35.367 Malloc1 00:13:35.367 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:35.367 Malloc2 00:13:35.367 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:35.626 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:35.885 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.144 [2024-11-19 09:15:36.971174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.144 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:36.144 09:15:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27d62be3-40d9-4305-a819-629ae5d7f4c5 -a 10.0.0.2 -s 4420 -i 4 00:13:36.144 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.144 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:36.144 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.144 09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:36.144 
09:15:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.680 [ 0]:0x1 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55b7ada770d843568570b803a2cbf158 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55b7ada770d843568570b803a2cbf158 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:38.680 [ 0]:0x1 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55b7ada770d843568570b803a2cbf158 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55b7ada770d843568570b803a2cbf158 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.680 09:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:38.680 [ 1]:0x2 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f911222fce94736ab289425e38088f9 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f911222fce94736ab289425e38088f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:38.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.680 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.940 09:15:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:39.199 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:39.199 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27d62be3-40d9-4305-a819-629ae5d7f4c5 -a 10.0.0.2 -s 4420 -i 4 00:13:39.458 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:39.458 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:39.458 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.458 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:13:39.458 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:13:39.458 09:15:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
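Here the test flips NSID 1 into masked mode: the namespace is detached and re-attached with --no-auto-visible, which leaves it mapped in the subsystem but invisible to every host until one is explicitly allowed. The two RPCs, condensed from the trace:

  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

The subsequent connect 1 therefore expects exactly one visible namespace, since NSID 2 stayed auto-visible.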
return 0 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.364 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.702 [ 0]:0x2 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
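NOT is autotest_common.sh's expect-failure wrapper: it runs the wrapped command, captures its exit status, and succeeds only when that status is non-zero. A condensed paraphrase of the logic traced above (es=1, then (( !es == 0 ))), omitting the wrapper's clamping of large exit codes:

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))    # pass only if the wrapped command failed
  }

So NOT ns_is_visible 0x1 asserts that NSID 1 is now hidden; the all-zero NGUID read back above is what makes the inner check fail.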
nguid=5f911222fce94736ab289425e38088f9 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f911222fce94736ab289425e38088f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.702 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.007 [ 0]:0x1 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55b7ada770d843568570b803a2cbf158 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55b7ada770d843568570b803a2cbf158 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:42.007 [ 1]:0x2 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f911222fce94736ab289425e38088f9 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f911222fce94736ab289425e38088f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.007 09:15:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.310 09:15:43 
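The per-host masking itself is driven by a pair of RPCs that attach and detach a host NQN from a --no-auto-visible namespace, and the trace shows the effect taking hold on the live connection, without a reconnect:

  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # unmask NSID 1 for host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # mask it again

After add_host both 0x1 and 0x2 pass ns_is_visible; after remove_host the next probe below is expected to fail again.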
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:42.310 [ 0]:0x2 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f911222fce94736ab289425e38088f9 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f911222fce94736ab289425e38088f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.310 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:42.592 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:42.592 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 27d62be3-40d9-4305-a819-629ae5d7f4c5 -a 10.0.0.2 -s 4420 -i 4 00:13:42.592 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:42.592 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:42.592 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
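connect 2 re-establishes the kernel initiator session expecting two visible namespaces, and waitforserial polls until lsblk reports that many block devices carrying the subsystem serial. A condensed sketch of the polling loop as traced (the exact placement of the per-iteration sleep is not fully visible here, so this assumes it re-sleeps between probes):

  waitforserial() {
      local serial=$1 count=${2:-1} i=0
      sleep 2
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == count )) && return 0
          sleep 2
      done
      return 1
  }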
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.592 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:42.592 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:42.592 09:15:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.123 [ 0]:0x1 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=55b7ada770d843568570b803a2cbf158 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 55b7ada770d843568570b803a2cbf158 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:45.123 [ 1]:0x2 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f911222fce94736ab289425e38088f9 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f911222fce94736ab289425e38088f9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.123 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.124 09:15:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:45.124 [ 0]:0x2 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f911222fce94736ab289425e38088f9 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f911222fce94736ab289425e38088f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.124 09:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:45.124 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:45.383 [2024-11-19 09:15:46.265548] nvmf_rpc.c:1892:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:45.383 request: 00:13:45.383 { 00:13:45.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.383 "nsid": 2, 00:13:45.383 "host": "nqn.2016-06.io.spdk:host1", 00:13:45.383 "method": "nvmf_ns_remove_host", 00:13:45.383 "req_id": 1 00:13:45.383 } 00:13:45.383 Got JSON-RPC error response 00:13:45.383 response: 00:13:45.383 { 00:13:45.383 "code": -32602, 00:13:45.383 "message": "Invalid parameters" 00:13:45.383 } 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:45.383 09:15:46 
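This is the first negative RPC test: NSID 2 was attached without --no-auto-visible, so visibility management is rejected for it, and the call fails with JSON-RPC error -32602 (Invalid parameters), which is exactly what the NOT wrapper demands:

  NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1

The nvmf_rpc_ns_visible_paused *ERROR* line above is the target-side log for the same rejection.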
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:45.383 [ 0]:0x2 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5f911222fce94736ab289425e38088f9 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5f911222fce94736ab289425e38088f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:45.383 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1069059 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
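From here the test switches from the kernel initiator to an SPDK host: a second spdk_tgt instance is started on its own RPC socket and pinned to core 1 (-m 2 is a core mask), and the hostrpc wrapper traced below simply targets that socket:

  spdk_tgt -r /var/tmp/host.sock -m 2 &
  hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }

Running the host in a separate process keeps the initiator-side bdev view (bdev_get_bdevs) cleanly separated from the target's RPC state.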
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1069059 /var/tmp/host.sock 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1069059 ']' 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:45.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:45.642 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.642 [2024-11-19 09:15:46.513344] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:13:45.642 [2024-11-19 09:15:46.513394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069059 ] 00:13:45.642 [2024-11-19 09:15:46.591316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.642 [2024-11-19 09:15:46.633102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.902 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:45.902 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:13:45.902 09:15:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.160 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.419 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7d179ec5-c02e-499e-93b8-8a113c980b7f 00:13:46.419 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:46.419 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7D179EC5C02E499E93B88A113C980B7F -i 00:13:46.419 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 67799985-a9ab-4164-80b6-d44abdb10f56 00:13:46.419 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:46.419 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 67799985A9AB416480B6D44ABDB10F56 -i 00:13:46.677 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
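uuid2nguid converts a bdev UUID into the 32-hex-digit NGUID form that the add_ns RPC takes with -g; the trace only shows the tr -d - step, so the uppercasing visible in the resulting argument is an assumption about where it happens. A hypothetical one-liner with the same effect:

  uuid2nguid() { tr -d - <<< "${1^^}"; }
  uuid2nguid 7d179ec5-c02e-499e-93b8-8a113c980b7f   # -> 7D179EC5C02E499E93B88A113C980B7F

Each malloc bdev is then re-attached with an explicit NGUID so the host side can later match bdev UUIDs against them.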
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.936 09:15:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:47.194 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:47.195 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:47.453 nvme0n1 00:13:47.453 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:47.453 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:47.712 nvme1n2 00:13:47.712 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:47.712 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:47.712 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:47.712 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:47.712 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:47.971 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:47.971 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:47.971 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:47.971 09:15:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:48.230 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7d179ec5-c02e-499e-93b8-8a113c980b7f == \7\d\1\7\9\e\c\5\-\c\0\2\e\-\4\9\9\e\-\9\3\b\8\-\8\a\1\1\3\c\9\8\0\b\7\f ]] 00:13:48.231 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:48.231 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:48.231 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:48.489 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
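With NSID 1 granted to host1 and NSID 2 to host2, the SPDK host attaches one controller per host NQN, and each controller surfaces only the namespace its host is allowed to see, hence the bdev names nvme0n1 and nvme1n2. Condensed from the trace:

  hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # -> nvme0n1 (NSID 1)
  hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # -> nvme1n2 (NSID 2)
  hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expected: 7d179ec5-c02e-499e-93b8-8a113c980b7f

The UUID read back through the host bdev must round-trip to the NGUID programmed at the target.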
67799985-a9ab-4164-80b6-d44abdb10f56 == \6\7\7\9\9\9\8\5\-\a\9\a\b\-\4\1\6\4\-\8\0\b\6\-\d\4\4\a\b\d\b\1\0\f\5\6 ]] 00:13:48.489 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.489 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7d179ec5-c02e-499e-93b8-8a113c980b7f 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7D179EC5C02E499E93B88A113C980B7F 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7D179EC5C02E499E93B88A113C980B7F 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:48.749 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7D179EC5C02E499E93B88A113C980B7F 00:13:49.008 [2024-11-19 09:15:49.871489] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:49.008 [2024-11-19 09:15:49.871519] subsystem.c:2300:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:49.008 [2024-11-19 09:15:49.871528] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.008 request: 00:13:49.008 { 00:13:49.008 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.008 "namespace": { 00:13:49.008 "bdev_name": 
"invalid", 00:13:49.008 "nsid": 1, 00:13:49.008 "nguid": "7D179EC5C02E499E93B88A113C980B7F", 00:13:49.008 "no_auto_visible": false 00:13:49.008 }, 00:13:49.008 "method": "nvmf_subsystem_add_ns", 00:13:49.008 "req_id": 1 00:13:49.008 } 00:13:49.008 Got JSON-RPC error response 00:13:49.008 response: 00:13:49.008 { 00:13:49.008 "code": -32602, 00:13:49.008 "message": "Invalid parameters" 00:13:49.008 } 00:13:49.008 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:49.008 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:49.008 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:49.008 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:49.008 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7d179ec5-c02e-499e-93b8-8a113c980b7f 00:13:49.008 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:49.008 09:15:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7D179EC5C02E499E93B88A113C980B7F -i 00:13:49.267 09:15:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:51.172 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:51.172 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:51.172 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1069059 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1069059 ']' 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1069059 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1069059 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1069059' 00:13:51.431 killing process with pid 1069059 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1069059 00:13:51.431 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1069059 00:13:51.690 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.949 rmmod nvme_tcp 00:13:51.949 rmmod nvme_fabrics 00:13:51.949 rmmod nvme_keyring 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1067178 ']' 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1067178 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1067178 ']' 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1067178 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.949 09:15:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1067178 00:13:52.208 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1067178' 00:13:52.209 killing process with pid 1067178 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1067178 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1067178 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
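Teardown is symmetric with the setup: the subsystem is deleted, the kernel initiator modules are unloaded (the bare rmmod lines above are modprobe's -v verbose output), and the long-lived target is reaped by PID:

  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  killprocess 1067178    # the nvmf target started at test entry

The ps --no-headers -o comm= check before each kill (process_name=reactor_0) guards against killing an unrelated process that reused the PID.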
00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.209 09:15:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:54.747 00:13:54.747 real 0m25.969s 00:13:54.747 user 0m30.938s 00:13:54.747 sys 0m7.226s 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 ************************************ 00:13:54.747 END TEST nvmf_ns_masking 00:13:54.747 ************************************ 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.747 ************************************ 00:13:54.747 START TEST nvmf_nvme_cli 00:13:54.747 ************************************ 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:54.747 * Looking for test storage... 
00:13:54.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:54.747 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:54.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.748 --rc genhtml_branch_coverage=1 00:13:54.748 --rc genhtml_function_coverage=1 00:13:54.748 --rc genhtml_legend=1 00:13:54.748 --rc geninfo_all_blocks=1 00:13:54.748 --rc geninfo_unexecuted_blocks=1 00:13:54.748 00:13:54.748 ' 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:54.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.748 --rc genhtml_branch_coverage=1 00:13:54.748 --rc genhtml_function_coverage=1 00:13:54.748 --rc genhtml_legend=1 00:13:54.748 --rc geninfo_all_blocks=1 00:13:54.748 --rc geninfo_unexecuted_blocks=1 00:13:54.748 00:13:54.748 ' 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:54.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.748 --rc genhtml_branch_coverage=1 00:13:54.748 --rc genhtml_function_coverage=1 00:13:54.748 --rc genhtml_legend=1 00:13:54.748 --rc geninfo_all_blocks=1 00:13:54.748 --rc geninfo_unexecuted_blocks=1 00:13:54.748 00:13:54.748 ' 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:54.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.748 --rc genhtml_branch_coverage=1 00:13:54.748 --rc genhtml_function_coverage=1 00:13:54.748 --rc genhtml_legend=1 00:13:54.748 --rc geninfo_all_blocks=1 00:13:54.748 --rc geninfo_unexecuted_blocks=1 00:13:54.748 00:13:54.748 ' 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
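The burst of scripts/common.sh lines above is a field-wise version comparison: lt 1.15 2 splits both versions on ., -, and :, then compares the fields numerically left to right, treating missing fields as 0, so 1.15 < 2 because 1 < 2 on the first field. A hypothetical standalone equivalent (the real cmp_versions also handles >, =, and mixed separators):

  version_lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }

Here it decides, from lcov --version, which coverage flags (--rc lcov_branch_coverage=1 etc.) the harness exports for this run.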
00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.748 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.749 09:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.749 09:15:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:01.319 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:01.320 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:01.320 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.320 
09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:01.320 Found net devices under 0000:86:00.0: cvl_0_0 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:01.320 Found net devices under 0000:86:00.1: cvl_0_1 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:01.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:14:01.320 00:14:01.320 --- 10.0.0.2 ping statistics --- 00:14:01.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.320 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:01.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:14:01.320 00:14:01.320 --- 10.0.0.1 ping statistics --- 00:14:01.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.320 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:01.320 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1073688 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1073688 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 1073688 ']' 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.321 09:16:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.321 [2024-11-19 09:16:01.553242] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
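
Before the target comes up, nvmf_tcp_init (traced above) builds a two-endpoint test network out of the two ice ports: one port moves into a private network namespace to play the target, the other stays in the host namespace as the initiator. Condensed to the bare ip/iptables commands from the trace (the interface names are the cvl_0_0/cvl_0_1 devices discovered earlier; substitute your own NIC names on other hosts):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, host side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP listener port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> host

Every later target-side command is then prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly what the NVMF_TARGET_NS_CMD array captured above provides.
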
00:14:01.321 [2024-11-19 09:16:01.553295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.321 [2024-11-19 09:16:01.632276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.321 [2024-11-19 09:16:01.675456] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.321 [2024-11-19 09:16:01.675496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.321 [2024-11-19 09:16:01.675503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.321 [2024-11-19 09:16:01.675509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.321 [2024-11-19 09:16:01.675514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.321 [2024-11-19 09:16:01.677015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.321 [2024-11-19 09:16:01.677122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.321 [2024-11-19 09:16:01.677156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.321 [2024-11-19 09:16:01.677157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 [2024-11-19 09:16:02.440534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 Malloc0 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
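
The rpc_cmd wrappers here and just below drive the target's provisioning for the nvme_cli test. Stripped of the xtrace noise, the sequence amounts to the following direct rpc.py calls (flags copied from the trace), capped by the nvme-cli probe the test performs next; this is a condensed replay, not the literal script:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # initiator side: enumerate both discovery log entries, then attach both namespaces
  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562

A successful run surfaces /dev/nvme0n1 and /dev/nvme0n2 in nvme list, which is what the get_nvme_devs loop below greps for before nvme disconnect -n nqn.2016-06.io.spdk:cnode1 tears the session back down.
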
00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 Malloc1 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 [2024-11-19 09:16:02.530913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.580 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:01.840 00:14:01.840 Discovery Log Number of Records 2, Generation counter 2 00:14:01.840 =====Discovery Log Entry 0====== 00:14:01.840 trtype: tcp 00:14:01.840 adrfam: ipv4 00:14:01.840 subtype: current discovery subsystem 00:14:01.840 treq: not required 00:14:01.840 portid: 0 00:14:01.840 trsvcid: 4420 00:14:01.840 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:01.840 traddr: 10.0.0.2 00:14:01.840 eflags: explicit discovery connections, duplicate discovery information 00:14:01.840 sectype: none 00:14:01.840 =====Discovery Log Entry 1====== 00:14:01.840 trtype: tcp 00:14:01.840 adrfam: ipv4 00:14:01.840 subtype: nvme subsystem 00:14:01.840 treq: not required 00:14:01.840 portid: 0 00:14:01.840 trsvcid: 4420 00:14:01.840 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:01.840 traddr: 10.0.0.2 00:14:01.840 eflags: none 00:14:01.840 sectype: none 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:01.840 09:16:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:03.218 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:03.218 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:03.218 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:03.218 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:03.218 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:03.218 09:16:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:05.124 09:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:05.124 /dev/nvme0n2 ]] 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:05.124 09:16:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.124 09:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:05.124 rmmod nvme_tcp 00:14:05.124 rmmod nvme_fabrics 00:14:05.124 rmmod nvme_keyring 00:14:05.124 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1073688 ']' 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1073688 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 1073688 ']' 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 1073688 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1073688 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1073688' 00:14:05.384 killing process with pid 1073688 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 1073688 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 1073688 00:14:05.384 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.643 09:16:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.549 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:07.549 00:14:07.549 real 0m13.157s 00:14:07.549 user 0m20.851s 00:14:07.549 sys 0m5.113s 00:14:07.549 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.549 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.549 ************************************ 00:14:07.549 END TEST nvmf_nvme_cli 00:14:07.549 ************************************ 00:14:07.549 09:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:07.549 09:16:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:07.549 09:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:07.549 09:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.549 09:16:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:07.549 ************************************ 00:14:07.549 START TEST nvmf_vfio_user 00:14:07.549 ************************************ 00:14:07.549 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:07.810 * Looking for test storage... 00:14:07.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.810 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:07.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.810 --rc genhtml_branch_coverage=1 00:14:07.810 --rc genhtml_function_coverage=1 00:14:07.810 --rc genhtml_legend=1 00:14:07.810 --rc geninfo_all_blocks=1 00:14:07.811 --rc geninfo_unexecuted_blocks=1 00:14:07.811 00:14:07.811 ' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:07.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.811 --rc genhtml_branch_coverage=1 00:14:07.811 --rc genhtml_function_coverage=1 00:14:07.811 --rc genhtml_legend=1 00:14:07.811 --rc geninfo_all_blocks=1 00:14:07.811 --rc geninfo_unexecuted_blocks=1 00:14:07.811 00:14:07.811 ' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:07.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.811 --rc genhtml_branch_coverage=1 00:14:07.811 --rc genhtml_function_coverage=1 00:14:07.811 --rc genhtml_legend=1 00:14:07.811 --rc geninfo_all_blocks=1 00:14:07.811 --rc geninfo_unexecuted_blocks=1 00:14:07.811 00:14:07.811 ' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:07.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.811 --rc genhtml_branch_coverage=1 00:14:07.811 --rc genhtml_function_coverage=1 00:14:07.811 --rc genhtml_legend=1 00:14:07.811 --rc geninfo_all_blocks=1 00:14:07.811 --rc geninfo_unexecuted_blocks=1 00:14:07.811 00:14:07.811 ' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
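
setup_nvmf_vfio_user (traced above) launches the target pinned to four cores and then blocks in waitforlisten until the RPC socket answers. A minimal polling sketch of that start-and-wait idiom, assuming the default /var/tmp/spdk.sock socket (waitforlisten in autotest_common.sh does more bookkeeping than this):

  APP=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $APP -i 0 -e 0xFFFF -m '[0,1,2,3]' &     # shm id 0, all tracepoint groups, cores 0-3
  nvmfpid=$!
  # poll the RPC socket; bail out if the target dies before it starts listening
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
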
00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1074983 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1074983' 00:14:07.811 Process pid: 1074983 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1074983 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1074983 ']' 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.811 09:16:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:07.811 [2024-11-19 09:16:08.856140] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:14:07.811 [2024-11-19 09:16:08.856189] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.071 [2024-11-19 09:16:08.931163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.071 [2024-11-19 09:16:08.977746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.071 [2024-11-19 09:16:08.977777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:08.071 [2024-11-19 09:16:08.977785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.071 [2024-11-19 09:16:08.977793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.071 [2024-11-19 09:16:08.977799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.071 [2024-11-19 09:16:08.979248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.071 [2024-11-19 09:16:08.979341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.071 [2024-11-19 09:16:08.979447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.071 [2024-11-19 09:16:08.979448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.071 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:08.071 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:08.071 09:16:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:09.447 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:09.447 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:09.447 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:09.447 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:09.447 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:09.447 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:09.706 Malloc1 00:14:09.707 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:09.707 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:09.966 09:16:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:10.225 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:10.225 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:10.225 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:10.483 Malloc2 00:14:10.484 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
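The setup_nvmf_vfio_user '' '' call traced above boils down to one transport plus, per device (NUM_DEVICES=2), a malloc bdev, a subsystem, a namespace, and a vfio-user listener; the second device's namespace and listener follow just below. A sketch condensed from the surrounding xtrace, using the exact RPCs of this run (the real function also threads through optional app/transport arguments):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    for i in 1 2; do                                   # NUM_DEVICES=2
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i     # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done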
00:14:10.743 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:10.743 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:11.002 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:11.002 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:11.002 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:11.002 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:11.002 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:11.002 09:16:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:11.002 [2024-11-19 09:16:12.004285] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:14:11.002 [2024-11-19 09:16:12.004323] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075611 ] 00:14:11.002 [2024-11-19 09:16:12.045902] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:11.002 [2024-11-19 09:16:12.052251] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:11.002 [2024-11-19 09:16:12.052273] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc054940000 00:14:11.002 [2024-11-19 09:16:12.053249] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.002 [2024-11-19 09:16:12.054255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.002 [2024-11-19 09:16:12.055259] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.002 [2024-11-19 09:16:12.056268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:11.002 [2024-11-19 09:16:12.057272] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:11.002 [2024-11-19 09:16:12.058282] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.263 [2024-11-19 09:16:12.059288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:11.263 [2024-11-19 09:16:12.060295] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:11.263 [2024-11-19 09:16:12.061304] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:11.263 [2024-11-19 09:16:12.061317] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc054935000 00:14:11.263 [2024-11-19 09:16:12.062261] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:11.263 [2024-11-19 09:16:12.070877] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:11.263 [2024-11-19 09:16:12.070906] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:11.263 [2024-11-19 09:16:12.075394] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:11.263 [2024-11-19 09:16:12.075430] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:11.263 [2024-11-19 09:16:12.075499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:11.263 [2024-11-19 09:16:12.075515] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:11.263 [2024-11-19 09:16:12.075521] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:11.263 [2024-11-19 09:16:12.076390] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:11.263 [2024-11-19 09:16:12.076400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:11.263 [2024-11-19 09:16:12.076406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:11.263 [2024-11-19 09:16:12.077393] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:11.263 [2024-11-19 09:16:12.077402] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:11.263 [2024-11-19 09:16:12.077409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:11.263 [2024-11-19 09:16:12.078399] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:11.263 [2024-11-19 09:16:12.078411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:11.263 [2024-11-19 09:16:12.079401] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
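The get_reg/set_reg traces here decode against the standard NVMe controller register map: offset 0x0 is CAP, 0x8 is VS (value 0x10300 = spec version 1.3, matching the identify output further down), 0x14 is CC, and 0x1c is CSTS; 0x24/0x28/0x30 just below are AQA/ASQ/ACQ (AQA 0xff00ff = 256-entry admin queues, matching "Maximum Queue Entries: 256"). These traces appear because the @83 identify run enabled component debug logs; a sketch of that invocation (the -L flags only take effect on a debug build of SPDK):

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci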
00:14:11.263 [2024-11-19 09:16:12.079409] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:11.263 [2024-11-19 09:16:12.079415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:11.263 [2024-11-19 09:16:12.079421] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:11.263 [2024-11-19 09:16:12.079529] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:11.263 [2024-11-19 09:16:12.079534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:11.263 [2024-11-19 09:16:12.079539] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:11.263 [2024-11-19 09:16:12.080957] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:11.263 [2024-11-19 09:16:12.081413] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:11.263 [2024-11-19 09:16:12.082417] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:11.263 [2024-11-19 09:16:12.083414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:11.263 [2024-11-19 09:16:12.083492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:11.263 [2024-11-19 09:16:12.084423] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:11.263 [2024-11-19 09:16:12.084431] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:11.263 [2024-11-19 09:16:12.084435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:11.263 [2024-11-19 09:16:12.084464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084480] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:11.263 [2024-11-19 09:16:12.084485] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:11.263 [2024-11-19 09:16:12.084488] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.263 [2024-11-19 09:16:12.084502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:11.263 [2024-11-19 09:16:12.084551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:11.263 [2024-11-19 09:16:12.084561] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:11.263 [2024-11-19 09:16:12.084565] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:11.263 [2024-11-19 09:16:12.084571] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:11.263 [2024-11-19 09:16:12.084576] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:11.263 [2024-11-19 09:16:12.084581] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:11.263 [2024-11-19 09:16:12.084586] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:11.263 [2024-11-19 09:16:12.084591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:11.263 [2024-11-19 09:16:12.084618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:11.263 [2024-11-19 09:16:12.084630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.263 [2024-11-19 09:16:12.084638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.263 [2024-11-19 09:16:12.084646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.263 [2024-11-19 09:16:12.084653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:11.263 [2024-11-19 09:16:12.084657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084663] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:11.263 [2024-11-19 09:16:12.084681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:11.263 [2024-11-19 09:16:12.084688] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:11.263 
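The set-number-of-queues step just below is the one exchange in this handshake that returns a payload in cdw0 (completion "cdw0:7e007e"). Per the NVMe Set Features spec, the low and high 16-bit halves are 0-based counts of granted I/O submission and completion queues, so both decode to 127, matching the "Number of I/O Queues: 127" in the identify dump later on. A quick shell check:

    cdw0=0x7e007e
    printf 'I/O SQs: %d, I/O CQs: %d\n' \
        $(( (cdw0 & 0xffff) + 1 )) $(( (cdw0 >> 16) + 1 ))   # -> 127, 127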
[2024-11-19 09:16:12.084693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:11.263 [2024-11-19 09:16:12.084721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:11.263 [2024-11-19 09:16:12.084771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:11.263 [2024-11-19 09:16:12.084787] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:11.264 [2024-11-19 09:16:12.084791] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:11.264 [2024-11-19 09:16:12.084794] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.264 [2024-11-19 09:16:12.084800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.084815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.084823] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:11.264 [2024-11-19 09:16:12.084831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084844] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:11.264 [2024-11-19 09:16:12.084848] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:11.264 [2024-11-19 09:16:12.084851] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.264 [2024-11-19 09:16:12.084857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.084878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.084890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084903] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:11.264 [2024-11-19 09:16:12.084907] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:11.264 [2024-11-19 09:16:12.084910] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.264 [2024-11-19 09:16:12.084915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.084927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.084934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084975] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:11.264 [2024-11-19 09:16:12.084979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:11.264 [2024-11-19 09:16:12.084984] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:11.264 [2024-11-19 09:16:12.085001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.085010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.085021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.085027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.085037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.085047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.085057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.085067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.085079] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:11.264 [2024-11-19 09:16:12.085083] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:11.264 [2024-11-19 09:16:12.085087] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:11.264 [2024-11-19 09:16:12.085090] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:11.264 [2024-11-19 09:16:12.085092] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:11.264 [2024-11-19 09:16:12.085098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:11.264 [2024-11-19 09:16:12.085104] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:11.264 [2024-11-19 09:16:12.085108] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:11.264 [2024-11-19 09:16:12.085111] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.264 [2024-11-19 09:16:12.085117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.085123] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:11.264 [2024-11-19 09:16:12.085126] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:11.264 [2024-11-19 09:16:12.085129] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.264 [2024-11-19 09:16:12.085135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.085142] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:11.264 [2024-11-19 09:16:12.085147] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:11.264 [2024-11-19 09:16:12.085150] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:11.264 [2024-11-19 09:16:12.085155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:11.264 [2024-11-19 09:16:12.085164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.085174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.085184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:11.264 [2024-11-19 09:16:12.085190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:11.264 ===================================================== 00:14:11.264 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:11.264 ===================================================== 00:14:11.264 Controller Capabilities/Features 00:14:11.264 ================================ 00:14:11.264 Vendor ID: 4e58 00:14:11.264 Subsystem Vendor ID: 4e58 00:14:11.264 Serial Number: SPDK1 00:14:11.264 Model Number: SPDK bdev Controller 00:14:11.264 Firmware Version: 25.01 00:14:11.264 Recommended Arb Burst: 6 00:14:11.264 IEEE OUI Identifier: 8d 6b 50 00:14:11.264 Multi-path I/O 00:14:11.264 May have multiple subsystem ports: Yes 00:14:11.264 May have multiple controllers: Yes 00:14:11.264 Associated with SR-IOV VF: No 00:14:11.264 Max Data Transfer Size: 131072 00:14:11.264 Max Number of Namespaces: 32 00:14:11.264 Max Number of I/O Queues: 127 00:14:11.264 NVMe Specification Version (VS): 1.3 00:14:11.264 NVMe Specification Version (Identify): 1.3 00:14:11.264 Maximum Queue Entries: 256 00:14:11.264 Contiguous Queues Required: Yes 00:14:11.264 Arbitration Mechanisms Supported 00:14:11.264 Weighted Round Robin: Not Supported 00:14:11.264 Vendor Specific: Not Supported 00:14:11.264 Reset Timeout: 15000 ms 00:14:11.264 Doorbell Stride: 4 bytes 00:14:11.264 NVM Subsystem Reset: Not Supported 00:14:11.264 Command Sets Supported 00:14:11.264 NVM Command Set: Supported 00:14:11.264 Boot Partition: Not Supported 00:14:11.264 Memory Page Size Minimum: 4096 bytes 00:14:11.264 Memory Page Size Maximum: 4096 bytes 00:14:11.264 Persistent Memory Region: Not Supported 00:14:11.264 Optional Asynchronous Events Supported 00:14:11.264 Namespace Attribute Notices: Supported 00:14:11.264 Firmware Activation Notices: Not Supported 00:14:11.264 ANA Change Notices: Not Supported 00:14:11.264 PLE Aggregate Log Change Notices: Not Supported 00:14:11.264 LBA Status Info Alert Notices: Not Supported 00:14:11.264 EGE Aggregate Log Change Notices: Not Supported 00:14:11.264 Normal NVM Subsystem Shutdown event: Not Supported 00:14:11.264 Zone Descriptor Change Notices: Not Supported 00:14:11.264 Discovery Log Change Notices: Not Supported 00:14:11.264 Controller Attributes 00:14:11.264 128-bit Host Identifier: Supported 00:14:11.264 Non-Operational Permissive Mode: Not Supported 00:14:11.264 NVM Sets: Not Supported 00:14:11.264 Read Recovery Levels: Not Supported 00:14:11.264 Endurance Groups: Not Supported 00:14:11.264 Predictable Latency Mode: Not Supported 00:14:11.264 Traffic Based Keep ALive: Not Supported 00:14:11.264 Namespace Granularity: Not Supported 00:14:11.265 SQ Associations: Not Supported 00:14:11.265 UUID List: Not Supported 00:14:11.265 Multi-Domain Subsystem: Not Supported 00:14:11.265 Fixed Capacity Management: Not Supported 00:14:11.265 Variable Capacity Management: Not Supported 00:14:11.265 Delete Endurance Group: Not Supported 00:14:11.265 Delete NVM Set: Not Supported 00:14:11.265 Extended LBA Formats Supported: Not Supported 00:14:11.265 Flexible Data Placement Supported: Not Supported 00:14:11.265 00:14:11.265 Controller Memory Buffer Support 00:14:11.265 ================================ 00:14:11.265 
Supported: No 00:14:11.265 00:14:11.265 Persistent Memory Region Support 00:14:11.265 ================================ 00:14:11.265 Supported: No 00:14:11.265 00:14:11.265 Admin Command Set Attributes 00:14:11.265 ============================ 00:14:11.265 Security Send/Receive: Not Supported 00:14:11.265 Format NVM: Not Supported 00:14:11.265 Firmware Activate/Download: Not Supported 00:14:11.265 Namespace Management: Not Supported 00:14:11.265 Device Self-Test: Not Supported 00:14:11.265 Directives: Not Supported 00:14:11.265 NVMe-MI: Not Supported 00:14:11.265 Virtualization Management: Not Supported 00:14:11.265 Doorbell Buffer Config: Not Supported 00:14:11.265 Get LBA Status Capability: Not Supported 00:14:11.265 Command & Feature Lockdown Capability: Not Supported 00:14:11.265 Abort Command Limit: 4 00:14:11.265 Async Event Request Limit: 4 00:14:11.265 Number of Firmware Slots: N/A 00:14:11.265 Firmware Slot 1 Read-Only: N/A 00:14:11.265 Firmware Activation Without Reset: N/A 00:14:11.265 Multiple Update Detection Support: N/A 00:14:11.265 Firmware Update Granularity: No Information Provided 00:14:11.265 Per-Namespace SMART Log: No 00:14:11.265 Asymmetric Namespace Access Log Page: Not Supported 00:14:11.265 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:11.265 Command Effects Log Page: Supported 00:14:11.265 Get Log Page Extended Data: Supported 00:14:11.265 Telemetry Log Pages: Not Supported 00:14:11.265 Persistent Event Log Pages: Not Supported 00:14:11.265 Supported Log Pages Log Page: May Support 00:14:11.265 Commands Supported & Effects Log Page: Not Supported 00:14:11.265 Feature Identifiers & Effects Log Page:May Support 00:14:11.265 NVMe-MI Commands & Effects Log Page: May Support 00:14:11.265 Data Area 4 for Telemetry Log: Not Supported 00:14:11.265 Error Log Page Entries Supported: 128 00:14:11.265 Keep Alive: Supported 00:14:11.265 Keep Alive Granularity: 10000 ms 00:14:11.265 00:14:11.265 NVM Command Set Attributes 00:14:11.265 ========================== 00:14:11.265 Submission Queue Entry Size 00:14:11.265 Max: 64 00:14:11.265 Min: 64 00:14:11.265 Completion Queue Entry Size 00:14:11.265 Max: 16 00:14:11.265 Min: 16 00:14:11.265 Number of Namespaces: 32 00:14:11.265 Compare Command: Supported 00:14:11.265 Write Uncorrectable Command: Not Supported 00:14:11.265 Dataset Management Command: Supported 00:14:11.265 Write Zeroes Command: Supported 00:14:11.265 Set Features Save Field: Not Supported 00:14:11.265 Reservations: Not Supported 00:14:11.265 Timestamp: Not Supported 00:14:11.265 Copy: Supported 00:14:11.265 Volatile Write Cache: Present 00:14:11.265 Atomic Write Unit (Normal): 1 00:14:11.265 Atomic Write Unit (PFail): 1 00:14:11.265 Atomic Compare & Write Unit: 1 00:14:11.265 Fused Compare & Write: Supported 00:14:11.265 Scatter-Gather List 00:14:11.265 SGL Command Set: Supported (Dword aligned) 00:14:11.265 SGL Keyed: Not Supported 00:14:11.265 SGL Bit Bucket Descriptor: Not Supported 00:14:11.265 SGL Metadata Pointer: Not Supported 00:14:11.265 Oversized SGL: Not Supported 00:14:11.265 SGL Metadata Address: Not Supported 00:14:11.265 SGL Offset: Not Supported 00:14:11.265 Transport SGL Data Block: Not Supported 00:14:11.265 Replay Protected Memory Block: Not Supported 00:14:11.265 00:14:11.265 Firmware Slot Information 00:14:11.265 ========================= 00:14:11.265 Active slot: 1 00:14:11.265 Slot 1 Firmware Revision: 25.01 00:14:11.265 00:14:11.265 00:14:11.265 Commands Supported and Effects 00:14:11.265 ============================== 00:14:11.265 Admin 
Commands
00:14:11.265 --------------
00:14:11.265 Get Log Page (02h): Supported
00:14:11.265 Identify (06h): Supported
00:14:11.265 Abort (08h): Supported
00:14:11.265 Set Features (09h): Supported
00:14:11.265 Get Features (0Ah): Supported
00:14:11.265 Asynchronous Event Request (0Ch): Supported
00:14:11.265 Keep Alive (18h): Supported
00:14:11.265 I/O Commands
00:14:11.265 ------------
00:14:11.265 Flush (00h): Supported LBA-Change
00:14:11.265 Write (01h): Supported LBA-Change
00:14:11.265 Read (02h): Supported
00:14:11.265 Compare (05h): Supported
00:14:11.265 Write Zeroes (08h): Supported LBA-Change
00:14:11.265 Dataset Management (09h): Supported LBA-Change
00:14:11.265 Copy (19h): Supported LBA-Change
00:14:11.265
00:14:11.265 Error Log
00:14:11.265 =========
00:14:11.265
00:14:11.265 Arbitration
00:14:11.265 ===========
00:14:11.265 Arbitration Burst: 1
00:14:11.265
00:14:11.265 Power Management
00:14:11.265 ================
00:14:11.265 Number of Power States: 1
00:14:11.265 Current Power State: Power State #0
00:14:11.265 Power State #0:
00:14:11.265 Max Power: 0.00 W
00:14:11.265 Non-Operational State: Operational
00:14:11.265 Entry Latency: Not Reported
00:14:11.265 Exit Latency: Not Reported
00:14:11.265 Relative Read Throughput: 0
00:14:11.265 Relative Read Latency: 0
00:14:11.265 Relative Write Throughput: 0
00:14:11.265 Relative Write Latency: 0
00:14:11.265 Idle Power: Not Reported
00:14:11.265 Active Power: Not Reported
00:14:11.265 Non-Operational Permissive Mode: Not Supported
00:14:11.265
00:14:11.265 Health Information
00:14:11.265 ==================
00:14:11.265 Critical Warnings:
00:14:11.265 Available Spare Space: OK
00:14:11.265 Temperature: OK
00:14:11.265 Device Reliability: OK
00:14:11.265 Read Only: No
00:14:11.265 Volatile Memory Backup: OK
00:14:11.265 Current Temperature: 0 Kelvin (-273 Celsius)
00:14:11.265 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:14:11.265 Available Spare: 0%
00:14:11.265 Available Spare Threshold: 0%
00:14:11.265 Life Percentage Used: 0%
00:14:11.265 Data Units Read: 0
00:14:11.265 Data Units Written: 0
00:14:11.265 Host Read Commands: 0
00:14:11.265 Host Write Commands: 0
00:14:11.265 Controller Busy Time: 0 minutes
00:14:11.265 Power Cycles: 0
00:14:11.265 Power On Hours: 0 hours
00:14:11.265 Unsafe Shutdowns: 0
00:14:11.265 Unrecoverable Media Errors: 0
00:14:11.265 Lifetime Error Log Entries: 0
00:14:11.265 Warning Temperature Time: 0 minutes
00:14:11.265 Critical Temperature Time: 0 minutes
00:14:11.265
00:14:11.265 Number of Queues
00:14:11.265 ================
00:14:11.265 Number of I/O Submission Queues: 127
00:14:11.265 Number of I/O Completion Queues: 127
00:14:11.266
00:14:11.266 Active Namespaces
00:14:11.266 =================
00:14:11.266 Namespace ID:1
00:14:11.266 Error Recovery Timeout: Unlimited
00:14:11.266 Command Set Identifier: NVM (00h)
00:14:11.266 Deallocate: Supported
00:14:11.266 Deallocated/Unwritten Error: Not Supported
00:14:11.266 Deallocated Read Value: Unknown
00:14:11.266 Deallocate in Write Zeroes: Not Supported
00:14:11.266 Deallocated Guard Field: 0xFFFF
00:14:11.266 Flush: Supported
00:14:11.266 Reservation: Supported
00:14:11.266 Namespace Sharing Capabilities: Multiple Controllers
00:14:11.266 Size (in LBAs): 131072 (0GiB)
00:14:11.266 Capacity (in LBAs): 131072 (0GiB)
00:14:11.266 Utilization (in LBAs): 131072 (0GiB)
00:14:11.266 NGUID: 4D7AFD2C1567477DB6B8ADF5124A3803
00:14:11.266 UUID: 4d7afd2c-1567-477d-b6b8-adf5124a3803
00:14:11.266 Thin Provisioning: Not Supported
00:14:11.266 Per-NS Atomic Units: Yes
00:14:11.266 Atomic Boundary Size (Normal): 0
00:14:11.266 Atomic Boundary Size (PFail): 0
00:14:11.266 Atomic Boundary Offset: 0
00:14:11.266 Maximum Single Source Range Length: 65535
00:14:11.266 Maximum Copy Length: 65535
00:14:11.266 Maximum Source Range Count: 1
00:14:11.266 NGUID/EUI64 Never Reused: No
00:14:11.266 Namespace Write Protected: No
00:14:11.266 Number of LBA Formats: 1
00:14:11.266 Current LBA Format: LBA Format #00
00:14:11.266 LBA Format #00: Data Size: 512 Metadata Size: 0
00:14:11.266
00:14:11.265 [2024-11-19 09:16:12.085278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:14:11.265 [2024-11-19 09:16:12.085290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:14:11.265 [2024-11-19 09:16:12.085315] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:14:11.265 [2024-11-19 09:16:12.085323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.265 [2024-11-19 09:16:12.085329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.265 [2024-11-19 09:16:12.085335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.265 [2024-11-19 09:16:12.085340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:11.265 [2024-11-19 09:16:12.088957] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:14:11.265 [2024-11-19 09:16:12.088968] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:14:11.265 [2024-11-19 09:16:12.089452] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:11.265 [2024-11-19 09:16:12.089504] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:14:11.265 [2024-11-19 09:16:12.089510] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:14:11.265 [2024-11-19 09:16:12.090462] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:14:11.265 [2024-11-19 09:16:12.090473] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:14:11.265 [2024-11-19 09:16:12.090522] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:14:11.265 [2024-11-19 09:16:12.092503] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:11.266 09:16:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:14:11.525 [2024-11-19 09:16:12.330798] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:16.799 Initializing NVMe Controllers 00:14:16.799 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:16.799 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:16.799 Initialization complete. Launching workers. 00:14:16.799 ======================================================== 00:14:16.799 Latency(us) 00:14:16.799 Device Information : IOPS MiB/s Average min max 00:14:16.799 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39981.56 156.18 3201.82 961.23 7596.21 00:14:16.799 ======================================================== 00:14:16.799 Total : 39981.56 156.18 3201.82 961.23 7596.21 00:14:16.799 00:14:16.799 [2024-11-19 09:16:17.353008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:16.799 09:16:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:16.799 [2024-11-19 09:16:17.593140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:22.070 Initializing NVMe Controllers 00:14:22.070 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:22.070 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:22.070 Initialization complete. Launching workers. 
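The read numbers above are internally consistent under Little's law: at queue depth 128, 128 / 39981.56 IO/s ≈ 3.20 ms, matching the reported 3201.82 us average within rounding (the 5-second write run below shows the same relation: 128 / 16057.43 ≈ 7.97 ms ≈ 7976.68 us). A one-line sanity check from the shell, with numbers copied from the table above:

    # Little's law: mean latency ≈ queue depth / IOPS
    printf '%.2f us\n' "$(echo '128 / 39981.56 * 1000000' | bc -l)"   # -> ~3201.48 us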
00:14:22.070 ======================================================== 00:14:22.070 Latency(us) 00:14:22.070 Device Information : IOPS MiB/s Average min max 00:14:22.070 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.43 62.72 7976.68 5968.90 8980.78 00:14:22.070 ======================================================== 00:14:22.070 Total : 16057.43 62.72 7976.68 5968.90 8980.78 00:14:22.070 00:14:22.070 [2024-11-19 09:16:22.634533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:22.070 09:16:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:22.070 [2024-11-19 09:16:22.849570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:27.344 [2024-11-19 09:16:27.942318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:27.344 Initializing NVMe Controllers 00:14:27.344 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:27.344 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:27.344 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:27.344 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:27.344 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:27.344 Initialization complete. Launching workers. 00:14:27.344 Starting thread on core 2 00:14:27.344 Starting thread on core 3 00:14:27.344 Starting thread on core 1 00:14:27.344 09:16:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:27.344 [2024-11-19 09:16:28.246363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:30.640 [2024-11-19 09:16:31.440141] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:30.640 Initializing NVMe Controllers 00:14:30.640 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:30.640 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:30.640 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:30.640 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:30.640 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:30.640 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:30.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:30.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:30.640 Initialization complete. Launching workers. 
00:14:30.640 Starting thread on core 1 with urgent priority queue 00:14:30.640 Starting thread on core 2 with urgent priority queue 00:14:30.640 Starting thread on core 3 with urgent priority queue 00:14:30.640 Starting thread on core 0 with urgent priority queue 00:14:30.640 SPDK bdev Controller (SPDK1 ) core 0: 8575.00 IO/s 11.66 secs/100000 ios 00:14:30.640 SPDK bdev Controller (SPDK1 ) core 1: 7175.33 IO/s 13.94 secs/100000 ios 00:14:30.640 SPDK bdev Controller (SPDK1 ) core 2: 6347.67 IO/s 15.75 secs/100000 ios 00:14:30.640 SPDK bdev Controller (SPDK1 ) core 3: 6406.33 IO/s 15.61 secs/100000 ios 00:14:30.640 ======================================================== 00:14:30.640 00:14:30.640 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:30.899 [2024-11-19 09:16:31.727415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:30.899 Initializing NVMe Controllers 00:14:30.900 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:30.900 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:30.900 Namespace ID: 1 size: 0GB 00:14:30.900 Initialization complete. 00:14:30.900 INFO: using host memory buffer for IO 00:14:30.900 Hello world! 00:14:30.900 [2024-11-19 09:16:31.761615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:30.900 09:16:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:31.159 [2024-11-19 09:16:32.046314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:32.097 Initializing NVMe Controllers 00:14:32.097 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.097 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.097 Initialization complete. Launching workers. 
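The arbitration table's "secs/100000 ios" column above is just the inverse of its IO/s column over the run's fixed I/O count (-n 100000, visible in the echoed configuration): core 0 at 8575.00 IO/s needs 100000 / 8575.00 ≈ 11.66 s, exactly as reported. A quick check:

    printf '%.2f s\n' "$(echo '100000 / 8575.00' | bc -l)"   # -> 11.66 s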
00:14:32.097 submit (in ns) avg, min, max = 6892.5, 3241.7, 3999804.3 00:14:32.097 complete (in ns) avg, min, max = 19391.1, 1784.3, 4167574.8 00:14:32.097 00:14:32.097 Submit histogram 00:14:32.097 ================ 00:14:32.097 Range in us Cumulative Count 00:14:32.097 3.242 - 3.256: 0.0365% ( 6) 00:14:32.097 3.256 - 3.270: 0.1216% ( 14) 00:14:32.097 3.270 - 3.283: 0.2128% ( 15) 00:14:32.097 3.283 - 3.297: 0.3343% ( 20) 00:14:32.097 3.297 - 3.311: 0.5045% ( 28) 00:14:32.097 3.311 - 3.325: 0.8206% ( 52) 00:14:32.097 3.325 - 3.339: 2.5834% ( 290) 00:14:32.097 3.339 - 3.353: 6.7655% ( 688) 00:14:32.097 3.353 - 3.367: 11.5920% ( 794) 00:14:32.097 3.367 - 3.381: 16.8379% ( 863) 00:14:32.097 3.381 - 3.395: 23.1901% ( 1045) 00:14:32.097 3.395 - 3.409: 29.4207% ( 1025) 00:14:32.097 3.409 - 3.423: 34.8246% ( 889) 00:14:32.097 3.423 - 3.437: 39.9976% ( 851) 00:14:32.097 3.437 - 3.450: 44.8058% ( 791) 00:14:32.097 3.450 - 3.464: 49.1642% ( 717) 00:14:32.097 3.464 - 3.478: 53.4375% ( 703) 00:14:32.097 3.478 - 3.492: 59.5283% ( 1002) 00:14:32.097 3.492 - 3.506: 65.5705% ( 994) 00:14:32.097 3.506 - 3.520: 69.9532% ( 721) 00:14:32.097 3.520 - 3.534: 75.2295% ( 868) 00:14:32.097 3.534 - 3.548: 80.0195% ( 788) 00:14:32.097 3.548 - 3.562: 83.0405% ( 497) 00:14:32.097 3.562 - 3.590: 86.4689% ( 564) 00:14:32.097 3.590 - 3.617: 87.7758% ( 215) 00:14:32.097 3.617 - 3.645: 88.8153% ( 171) 00:14:32.097 3.645 - 3.673: 90.2498% ( 236) 00:14:32.097 3.673 - 3.701: 91.8607% ( 265) 00:14:32.097 3.701 - 3.729: 93.5566% ( 279) 00:14:32.097 3.729 - 3.757: 95.3437% ( 294) 00:14:32.097 3.757 - 3.784: 96.9667% ( 267) 00:14:32.097 3.784 - 3.812: 98.0609% ( 180) 00:14:32.098 3.812 - 3.840: 98.8572% ( 131) 00:14:32.098 3.840 - 3.868: 99.2949% ( 72) 00:14:32.098 3.868 - 3.896: 99.5380% ( 40) 00:14:32.098 3.896 - 3.923: 99.6110% ( 12) 00:14:32.098 3.923 - 3.951: 99.6474% ( 6) 00:14:32.098 3.951 - 3.979: 99.6596% ( 2) 00:14:32.098 3.979 - 4.007: 99.6718% ( 2) 00:14:32.098 4.063 - 4.090: 99.6778% ( 1) 00:14:32.098 4.174 - 4.202: 99.6839% ( 1) 00:14:32.098 5.398 - 5.426: 99.6900% ( 1) 00:14:32.098 5.510 - 5.537: 99.7021% ( 2) 00:14:32.098 5.537 - 5.565: 99.7082% ( 1) 00:14:32.098 5.593 - 5.621: 99.7143% ( 1) 00:14:32.098 5.704 - 5.732: 99.7204% ( 1) 00:14:32.098 5.732 - 5.760: 99.7265% ( 1) 00:14:32.098 5.788 - 5.816: 99.7325% ( 1) 00:14:32.098 6.094 - 6.122: 99.7386% ( 1) 00:14:32.098 6.122 - 6.150: 99.7447% ( 1) 00:14:32.098 6.233 - 6.261: 99.7508% ( 1) 00:14:32.098 6.317 - 6.344: 99.7569% ( 1) 00:14:32.098 6.372 - 6.400: 99.7629% ( 1) 00:14:32.098 6.428 - 6.456: 99.7690% ( 1) 00:14:32.098 6.567 - 6.595: 99.7751% ( 1) 00:14:32.098 6.678 - 6.706: 99.7812% ( 1) 00:14:32.098 6.734 - 6.762: 99.7872% ( 1) 00:14:32.098 6.762 - 6.790: 99.7933% ( 1) 00:14:32.098 6.901 - 6.929: 99.8055% ( 2) 00:14:32.098 6.929 - 6.957: 99.8116% ( 1) 00:14:32.098 7.012 - 7.040: 99.8176% ( 1) 00:14:32.098 7.040 - 7.068: 99.8237% ( 1) 00:14:32.098 7.123 - 7.179: 99.8298% ( 1) 00:14:32.098 7.402 - 7.457: 99.8359% ( 1) 00:14:32.098 7.569 - 7.624: 99.8480% ( 2) 00:14:32.098 7.847 - 7.903: 99.8602% ( 2) 00:14:32.098 7.958 - 8.014: 99.8663% ( 1) 00:14:32.098 8.125 - 8.181: 99.8723% ( 1) 00:14:32.098 8.292 - 8.348: 99.8784% ( 1) 00:14:32.098 8.403 - 8.459: 99.8845% ( 1) 00:14:32.098 8.459 - 8.515: 99.8906% ( 1) 00:14:32.098 8.960 - 9.016: 99.8967% ( 1) 00:14:32.098 9.127 - 9.183: 99.9027% ( 1) 00:14:32.098 13.412 - 13.468: 99.9088% ( 1) 00:14:32.098 13.857 - 13.913: 99.9149% ( 1) 00:14:32.098 3989.148 - 4017.642: 100.0000% ( 14) 00:14:32.098 00:14:32.098 Complete 
histogram
00:14:32.098 ==================
00:14:32.098 Range in us Cumulative Count
00:14:32.098 1.781 - 1.795: 0.0304% ( 5)
00:14:32.098 1.795 - 1.809: 0.0426% ( 2)
00:14:32.098 1.809 - 1.823: 0.6139% ( 94)
00:14:32.098 1.823 - 1.837: 2.0242% ( 232)
00:14:32.098 1.837 - 1.850: 3.6594% ( 269)
00:14:32.098 1.850 - 1.864: 5.5620% ( 313)
00:14:32.098 1.864 - 1.878: 38.2834% ( 5383)
00:14:32.098 1.878 - 1.892: 84.0861% ( 7535)
00:14:32.098 1.892 - 1.906: 92.5050% ( 1385)
00:14:32.098 1.906 - 1.920: 95.5869% ( 507)
00:14:32.098 1.920 - 1.934: 96.3893% ( 132)
00:14:32.098 1.934 - 1.948: 97.3923% ( 165)
00:14:32.098 1.948 - 1.962: 98.5350% ( 188)
00:14:32.098 1.962 - 1.976: 99.1064% ( 94)
00:14:32.098 1.976 - 1.990: 99.2402% ( 22)
00:14:32.098 1.990 - 2.003: 99.2706% ( 5)
00:14:32.098 2.003 - 2.017: 99.3010% ( 5)
00:14:32.098 2.031 - 2.045: 99.3192% ( 3)
00:14:32.098 2.073 - 2.087: 99.3374% ( 3)
00:14:32.098 2.087 - 2.101: 99.3435% ( 1)
00:14:32.098 2.129 - 2.143: 99.3557% ( 2)
00:14:32.098 2.157 - 2.170: 99.3617% ( 1)
00:14:32.098 2.170 - 2.184: 99.3678% ( 1)
00:14:32.098 2.198 - 2.212: 99.3739% ( 1)
00:14:32.098 2.254 - 2.268: 99.3800% ( 1)
00:14:32.098 2.323 - 2.337: 99.3861% ( 1)
00:14:32.098 2.727 - 2.741: 99.3921% ( 1)
00:14:32.098 3.673 - 3.701: 99.3982% ( 1)
00:14:32.098 3.812 - 3.840: 99.4043% ( 1)
00:14:32.098 3.868 - 3.896: 99.4104% ( 1)
00:14:32.098 3.923 - 3.951: 99.4164% ( 1)
00:14:32.098 4.508 - 4.536: 99.4225% ( 1)
00:14:32.098 4.758 - 4.786: 99.4286% ( 1)
00:14:32.098 5.037 - 5.064: 99.4347% ( 1)
00:14:32.098 5.120 - 5.148: 99.4468% ( 2)
00:14:32.098 5.259 - 5.287: 99.4529% ( 1)
00:14:32.098 5.398 - 5.426: 99.4590% ( 1)
00:14:32.098 5.482 - 5.510: 99.4651% ( 1)
00:14:32.098 5.593 - 5.621: 99.4712% ( 1)
00:14:32.098 5.677 - 5.704: 99.4772% ( 1)
00:14:32.098 5.760 - 5.788: 99.4833% ( 1)
00:14:32.098 6.150 - 6.177: 99.4955% ( 2)
00:14:32.098 6.205 - 6.233: 99.5016% ( 1)
00:14:32.098 6.817 - 6.845: 99.5076% ( 1)
00:14:32.098 6.845 - 6.873: 99.5137% ( 1)
00:14:32.098 7.040 - 7.068: 99.5198% ( 1)
00:14:32.098 7.096 - 7.123: 99.5259% ( 1)
00:14:32.098 7.457 - 7.513: 99.5319% ( 1)
00:14:32.098 7.736 - 7.791: 99.5380% ( 1)
00:14:32.098 7.903 - 7.958: 99.5441% ( 1)
00:14:32.098 9.683 - 9.739: 99.5502% ( 1)
00:14:32.098 11.409 - 11.464: 99.5563% ( 1)
00:14:32.098 138.017 - 138.908: 99.5623% ( 1)
00:14:32.098 3989.148 - 4017.642: 99.9939% ( 71)
00:14:32.098 4160.111 - 4188.605: 100.0000% ( 1)
00:14:32.098 [2024-11-19 09:16:33.068302] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:32.098
00:14:32.098 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:14:32.098 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:14:32.098 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:14:32.098 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:14:32.098 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:14:32.357 [
00:14:32.357 {
00:14:32.357 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:32.357 "subtype": "Discovery",
00:14:32.357 "listen_addresses": [],
"allow_any_host": true, 00:14:32.357 "hosts": [] 00:14:32.357 }, 00:14:32.357 { 00:14:32.357 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:32.357 "subtype": "NVMe", 00:14:32.357 "listen_addresses": [ 00:14:32.357 { 00:14:32.357 "trtype": "VFIOUSER", 00:14:32.357 "adrfam": "IPv4", 00:14:32.357 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:32.357 "trsvcid": "0" 00:14:32.357 } 00:14:32.357 ], 00:14:32.357 "allow_any_host": true, 00:14:32.357 "hosts": [], 00:14:32.357 "serial_number": "SPDK1", 00:14:32.357 "model_number": "SPDK bdev Controller", 00:14:32.357 "max_namespaces": 32, 00:14:32.357 "min_cntlid": 1, 00:14:32.357 "max_cntlid": 65519, 00:14:32.357 "namespaces": [ 00:14:32.357 { 00:14:32.357 "nsid": 1, 00:14:32.357 "bdev_name": "Malloc1", 00:14:32.357 "name": "Malloc1", 00:14:32.357 "nguid": "4D7AFD2C1567477DB6B8ADF5124A3803", 00:14:32.357 "uuid": "4d7afd2c-1567-477d-b6b8-adf5124a3803" 00:14:32.357 } 00:14:32.357 ] 00:14:32.357 }, 00:14:32.357 { 00:14:32.357 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:32.357 "subtype": "NVMe", 00:14:32.357 "listen_addresses": [ 00:14:32.357 { 00:14:32.357 "trtype": "VFIOUSER", 00:14:32.357 "adrfam": "IPv4", 00:14:32.357 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:32.357 "trsvcid": "0" 00:14:32.357 } 00:14:32.357 ], 00:14:32.357 "allow_any_host": true, 00:14:32.357 "hosts": [], 00:14:32.357 "serial_number": "SPDK2", 00:14:32.357 "model_number": "SPDK bdev Controller", 00:14:32.357 "max_namespaces": 32, 00:14:32.357 "min_cntlid": 1, 00:14:32.357 "max_cntlid": 65519, 00:14:32.357 "namespaces": [ 00:14:32.357 { 00:14:32.357 "nsid": 1, 00:14:32.357 "bdev_name": "Malloc2", 00:14:32.357 "name": "Malloc2", 00:14:32.357 "nguid": "D7915B39C2D74283962B9C04E326E07D", 00:14:32.357 "uuid": "d7915b39-c2d7-4283-962b-9c04e326e07d" 00:14:32.357 } 00:14:32.357 ] 00:14:32.357 } 00:14:32.357 ] 00:14:32.357 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:32.358 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:32.358 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1079133 00:14:32.358 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:32.358 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:32.358 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:32.358 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:32.358 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:32.358 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:32.358 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:32.617 [2024-11-19 09:16:33.462412] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:32.617 Malloc3 00:14:32.617 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:32.876 [2024-11-19 09:16:33.710268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:32.876 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:32.876 Asynchronous Event Request test 00:14:32.876 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.876 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.876 Registering asynchronous event callbacks... 00:14:32.876 Starting namespace attribute notice tests for all controllers... 00:14:32.876 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:32.876 aer_cb - Changed Namespace 00:14:32.876 Cleaning up... 00:14:32.876 [ 00:14:32.876 { 00:14:32.876 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:32.876 "subtype": "Discovery", 00:14:32.876 "listen_addresses": [], 00:14:32.876 "allow_any_host": true, 00:14:32.876 "hosts": [] 00:14:32.876 }, 00:14:32.876 { 00:14:32.876 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:32.876 "subtype": "NVMe", 00:14:32.876 "listen_addresses": [ 00:14:32.876 { 00:14:32.876 "trtype": "VFIOUSER", 00:14:32.876 "adrfam": "IPv4", 00:14:32.876 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:32.876 "trsvcid": "0" 00:14:32.876 } 00:14:32.876 ], 00:14:32.876 "allow_any_host": true, 00:14:32.876 "hosts": [], 00:14:32.876 "serial_number": "SPDK1", 00:14:32.876 "model_number": "SPDK bdev Controller", 00:14:32.876 "max_namespaces": 32, 00:14:32.876 "min_cntlid": 1, 00:14:32.876 "max_cntlid": 65519, 00:14:32.876 "namespaces": [ 00:14:32.876 { 00:14:32.876 "nsid": 1, 00:14:32.876 "bdev_name": "Malloc1", 00:14:32.876 "name": "Malloc1", 00:14:32.877 "nguid": "4D7AFD2C1567477DB6B8ADF5124A3803", 00:14:32.877 "uuid": "4d7afd2c-1567-477d-b6b8-adf5124a3803" 00:14:32.877 }, 00:14:32.877 { 00:14:32.877 "nsid": 2, 00:14:32.877 "bdev_name": "Malloc3", 00:14:32.877 "name": "Malloc3", 00:14:32.877 "nguid": "FA3B3BCD0D374D789CD6E2ADD6FEA943", 00:14:32.877 "uuid": "fa3b3bcd-0d37-4d78-9cd6-e2add6fea943" 00:14:32.877 } 00:14:32.877 ] 00:14:32.877 }, 00:14:32.877 { 00:14:32.877 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:32.877 "subtype": "NVMe", 00:14:32.877 "listen_addresses": [ 00:14:32.877 { 00:14:32.877 "trtype": "VFIOUSER", 00:14:32.877 "adrfam": "IPv4", 00:14:32.877 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:32.877 "trsvcid": "0" 00:14:32.877 } 00:14:32.877 ], 00:14:32.877 "allow_any_host": true, 00:14:32.877 "hosts": [], 00:14:32.877 "serial_number": "SPDK2", 00:14:32.877 "model_number": "SPDK bdev 
Controller", 00:14:32.877 "max_namespaces": 32, 00:14:32.877 "min_cntlid": 1, 00:14:32.877 "max_cntlid": 65519, 00:14:32.877 "namespaces": [ 00:14:32.877 { 00:14:32.877 "nsid": 1, 00:14:32.877 "bdev_name": "Malloc2", 00:14:32.877 "name": "Malloc2", 00:14:32.877 "nguid": "D7915B39C2D74283962B9C04E326E07D", 00:14:32.877 "uuid": "d7915b39-c2d7-4283-962b-9c04e326e07d" 00:14:32.877 } 00:14:32.877 ] 00:14:32.877 } 00:14:32.877 ] 00:14:32.877 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1079133 00:14:32.877 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:32.877 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:33.138 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:33.138 09:16:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:33.138 [2024-11-19 09:16:33.959802] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:14:33.138 [2024-11-19 09:16:33.959849] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079150 ] 00:14:33.138 [2024-11-19 09:16:34.001732] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:33.138 [2024-11-19 09:16:34.009161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:33.138 [2024-11-19 09:16:34.009181] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0ea20ce000 00:14:33.138 [2024-11-19 09:16:34.010167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.138 [2024-11-19 09:16:34.011181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.138 [2024-11-19 09:16:34.012182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.138 [2024-11-19 09:16:34.013188] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.138 [2024-11-19 09:16:34.014200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.138 [2024-11-19 09:16:34.015203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.138 [2024-11-19 09:16:34.016214] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.138 [2024-11-19 09:16:34.018951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:14:33.138 [2024-11-19 09:16:34.019233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:33.138 [2024-11-19 09:16:34.019246] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0ea20c3000 00:14:33.138 [2024-11-19 09:16:34.020181] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:33.138 [2024-11-19 09:16:34.029705] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:33.138 [2024-11-19 09:16:34.029730] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:33.138 [2024-11-19 09:16:34.034808] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:33.138 [2024-11-19 09:16:34.034847] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:33.138 [2024-11-19 09:16:34.034921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:33.138 [2024-11-19 09:16:34.034936] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:33.138 [2024-11-19 09:16:34.034942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:33.138 [2024-11-19 09:16:34.035811] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:33.138 [2024-11-19 09:16:34.035822] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:33.138 [2024-11-19 09:16:34.035828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:33.138 [2024-11-19 09:16:34.036818] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:33.138 [2024-11-19 09:16:34.036827] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:33.138 [2024-11-19 09:16:34.036834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:33.138 [2024-11-19 09:16:34.037825] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:33.138 [2024-11-19 09:16:34.037834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:33.138 [2024-11-19 09:16:34.038831] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:33.138 [2024-11-19 09:16:34.038839] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
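The state transitions traced here follow the standard NVMe enable handshake: clear CC.EN and wait for CSTS.RDY to drop, then set CC.EN = 1 and poll until CSTS.RDY = 1, which is exactly what the trace just below records. The register offsets 0x14 and 0x1c in these reads are CC and CSTS. A minimal sketch of that loop, assuming hypothetical read_reg32/write_reg32 helpers over the controller's register window:

import time

CC, CSTS = 0x14, 0x1C          # offsets seen in the vfio-user register reads above
CC_EN, CSTS_RDY = 1 << 0, 1 << 0

def enable_controller(read_reg32, write_reg32, timeout_s=15.0):
    deadline = time.monotonic() + timeout_s
    write_reg32(CC, read_reg32(CC) & ~CC_EN)      # disable: CC.EN = 0
    while read_reg32(CSTS) & CSTS_RDY:            # wait for CSTS.RDY = 0
        if time.monotonic() > deadline:
            raise TimeoutError("CSTS.RDY did not clear")
        time.sleep(0.001)
    write_reg32(CC, read_reg32(CC) | CC_EN)       # enable: CC.EN = 1
    while not read_reg32(CSTS) & CSTS_RDY:        # wait for CSTS.RDY = 1
        if time.monotonic() > deadline:
            raise TimeoutError("CSTS.RDY did not set")
        time.sleep(0.001)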
00:14:33.138 [2024-11-19 09:16:34.038845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:33.138 [2024-11-19 09:16:34.038851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:33.138 [2024-11-19 09:16:34.038958] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:33.138 [2024-11-19 09:16:34.038963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:33.138 [2024-11-19 09:16:34.038967] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:33.138 [2024-11-19 09:16:34.039834] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:33.138 [2024-11-19 09:16:34.040845] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:33.138 [2024-11-19 09:16:34.041857] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:33.138 [2024-11-19 09:16:34.042857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:33.138 [2024-11-19 09:16:34.042896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:33.138 [2024-11-19 09:16:34.043952] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:33.138 [2024-11-19 09:16:34.043961] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:33.138 [2024-11-19 09:16:34.043965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:33.138 [2024-11-19 09:16:34.043983] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:33.138 [2024-11-19 09:16:34.043994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:33.138 [2024-11-19 09:16:34.044006] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.138 [2024-11-19 09:16:34.044011] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.138 [2024-11-19 09:16:34.044014] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.138 [2024-11-19 09:16:34.044025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.138 [2024-11-19 09:16:34.051956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:33.139 
[2024-11-19 09:16:34.051967] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:33.139 [2024-11-19 09:16:34.051972] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:33.139 [2024-11-19 09:16:34.051975] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:33.139 [2024-11-19 09:16:34.051980] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:33.139 [2024-11-19 09:16:34.051985] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:33.139 [2024-11-19 09:16:34.051992] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:33.139 [2024-11-19 09:16:34.051997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.052005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.052014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.059952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:33.139 [2024-11-19 09:16:34.059968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.139 [2024-11-19 09:16:34.059976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.139 [2024-11-19 09:16:34.059985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.139 [2024-11-19 09:16:34.059993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.139 [2024-11-19 09:16:34.059998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.060004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.060012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.067956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:33.139 [2024-11-19 09:16:34.067970] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:33.139 [2024-11-19 09:16:34.067976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:14:33.139 [2024-11-19 09:16:34.067982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.067988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.067996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.075953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:33.139 [2024-11-19 09:16:34.076012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.076019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.076026] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:33.139 [2024-11-19 09:16:34.076031] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:33.139 [2024-11-19 09:16:34.076034] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.139 [2024-11-19 09:16:34.076040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.083954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:33.139 [2024-11-19 09:16:34.083966] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:33.139 [2024-11-19 09:16:34.083974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.083981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.083987] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.139 [2024-11-19 09:16:34.083991] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.139 [2024-11-19 09:16:34.083994] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.139 [2024-11-19 09:16:34.084002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.091954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:33.139 [2024-11-19 09:16:34.091968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.091976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.091982] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.139 [2024-11-19 09:16:34.091987] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.139 [2024-11-19 09:16:34.091990] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.139 [2024-11-19 09:16:34.091995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.097953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:33.139 [2024-11-19 09:16:34.097964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.097970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.097978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.097983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.097988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.097993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.097997] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:33.139 [2024-11-19 09:16:34.098002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:33.139 [2024-11-19 09:16:34.098006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:33.139 [2024-11-19 09:16:34.098021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.107954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:33.139 [2024-11-19 09:16:34.107967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.115953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:33.139 [2024-11-19 09:16:34.115966] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.123953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
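The Number of Queues completions in this sequence return cdw0 0x7e007e (the Set Features completion appears just above, and the Get Features readback just below returns the same value). Per the NVMe spec, NSQA sits in bits 15:0 and NCQA in bits 31:16, both zero-based, so 0x7e grants 127 submission and 127 completion queues, matching the "Number of I/O Queues: 127" line in the identify report further down. A quick decode:

def decode_num_queues(cdw0: int) -> tuple:
    # NSQA in bits 15:0, NCQA in bits 31:16; both zero-based counts.
    nsqa = (cdw0 & 0xFFFF) + 1
    ncqa = ((cdw0 >> 16) & 0xFFFF) + 1
    return nsqa, ncqa

assert decode_num_queues(0x7E007E) == (127, 127)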
00:14:33.139 [2024-11-19 09:16:34.123965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.131955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:33.139 [2024-11-19 09:16:34.131970] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:33.139 [2024-11-19 09:16:34.131974] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:33.139 [2024-11-19 09:16:34.131978] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:33.139 [2024-11-19 09:16:34.131981] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:33.139 [2024-11-19 09:16:34.131983] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:33.139 [2024-11-19 09:16:34.131990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:33.139 [2024-11-19 09:16:34.131997] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:33.139 [2024-11-19 09:16:34.132000] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:33.139 [2024-11-19 09:16:34.132004] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.139 [2024-11-19 09:16:34.132009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.132015] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:33.139 [2024-11-19 09:16:34.132019] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.139 [2024-11-19 09:16:34.132022] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.139 [2024-11-19 09:16:34.132027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.139 [2024-11-19 09:16:34.132035] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:33.140 [2024-11-19 09:16:34.132039] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:33.140 [2024-11-19 09:16:34.132042] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.140 [2024-11-19 09:16:34.132047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:33.140 [2024-11-19 09:16:34.139953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:33.140 [2024-11-19 09:16:34.139966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:33.140 [2024-11-19 09:16:34.139976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:33.140 
[2024-11-19 09:16:34.139982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:33.140 ===================================================== 00:14:33.140 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:33.140 ===================================================== 00:14:33.140 Controller Capabilities/Features 00:14:33.140 ================================ 00:14:33.140 Vendor ID: 4e58 00:14:33.140 Subsystem Vendor ID: 4e58 00:14:33.140 Serial Number: SPDK2 00:14:33.140 Model Number: SPDK bdev Controller 00:14:33.140 Firmware Version: 25.01 00:14:33.140 Recommended Arb Burst: 6 00:14:33.140 IEEE OUI Identifier: 8d 6b 50 00:14:33.140 Multi-path I/O 00:14:33.140 May have multiple subsystem ports: Yes 00:14:33.140 May have multiple controllers: Yes 00:14:33.140 Associated with SR-IOV VF: No 00:14:33.140 Max Data Transfer Size: 131072 00:14:33.140 Max Number of Namespaces: 32 00:14:33.140 Max Number of I/O Queues: 127 00:14:33.140 NVMe Specification Version (VS): 1.3 00:14:33.140 NVMe Specification Version (Identify): 1.3 00:14:33.140 Maximum Queue Entries: 256 00:14:33.140 Contiguous Queues Required: Yes 00:14:33.140 Arbitration Mechanisms Supported 00:14:33.140 Weighted Round Robin: Not Supported 00:14:33.140 Vendor Specific: Not Supported 00:14:33.140 Reset Timeout: 15000 ms 00:14:33.140 Doorbell Stride: 4 bytes 00:14:33.140 NVM Subsystem Reset: Not Supported 00:14:33.140 Command Sets Supported 00:14:33.140 NVM Command Set: Supported 00:14:33.140 Boot Partition: Not Supported 00:14:33.140 Memory Page Size Minimum: 4096 bytes 00:14:33.140 Memory Page Size Maximum: 4096 bytes 00:14:33.140 Persistent Memory Region: Not Supported 00:14:33.140 Optional Asynchronous Events Supported 00:14:33.140 Namespace Attribute Notices: Supported 00:14:33.140 Firmware Activation Notices: Not Supported 00:14:33.140 ANA Change Notices: Not Supported 00:14:33.140 PLE Aggregate Log Change Notices: Not Supported 00:14:33.140 LBA Status Info Alert Notices: Not Supported 00:14:33.140 EGE Aggregate Log Change Notices: Not Supported 00:14:33.140 Normal NVM Subsystem Shutdown event: Not Supported 00:14:33.140 Zone Descriptor Change Notices: Not Supported 00:14:33.140 Discovery Log Change Notices: Not Supported 00:14:33.140 Controller Attributes 00:14:33.140 128-bit Host Identifier: Supported 00:14:33.140 Non-Operational Permissive Mode: Not Supported 00:14:33.140 NVM Sets: Not Supported 00:14:33.140 Read Recovery Levels: Not Supported 00:14:33.140 Endurance Groups: Not Supported 00:14:33.140 Predictable Latency Mode: Not Supported 00:14:33.140 Traffic Based Keep ALive: Not Supported 00:14:33.140 Namespace Granularity: Not Supported 00:14:33.140 SQ Associations: Not Supported 00:14:33.140 UUID List: Not Supported 00:14:33.140 Multi-Domain Subsystem: Not Supported 00:14:33.140 Fixed Capacity Management: Not Supported 00:14:33.140 Variable Capacity Management: Not Supported 00:14:33.140 Delete Endurance Group: Not Supported 00:14:33.140 Delete NVM Set: Not Supported 00:14:33.140 Extended LBA Formats Supported: Not Supported 00:14:33.140 Flexible Data Placement Supported: Not Supported 00:14:33.140 00:14:33.140 Controller Memory Buffer Support 00:14:33.140 ================================ 00:14:33.140 Supported: No 00:14:33.140 00:14:33.140 Persistent Memory Region Support 00:14:33.140 ================================ 00:14:33.140 Supported: No 00:14:33.140 00:14:33.140 Admin Command Set Attributes 
00:14:33.140 ============================ 00:14:33.140 Security Send/Receive: Not Supported 00:14:33.140 Format NVM: Not Supported 00:14:33.140 Firmware Activate/Download: Not Supported 00:14:33.140 Namespace Management: Not Supported 00:14:33.140 Device Self-Test: Not Supported 00:14:33.140 Directives: Not Supported 00:14:33.140 NVMe-MI: Not Supported 00:14:33.140 Virtualization Management: Not Supported 00:14:33.140 Doorbell Buffer Config: Not Supported 00:14:33.140 Get LBA Status Capability: Not Supported 00:14:33.140 Command & Feature Lockdown Capability: Not Supported 00:14:33.140 Abort Command Limit: 4 00:14:33.140 Async Event Request Limit: 4 00:14:33.140 Number of Firmware Slots: N/A 00:14:33.140 Firmware Slot 1 Read-Only: N/A 00:14:33.140 Firmware Activation Without Reset: N/A 00:14:33.140 Multiple Update Detection Support: N/A 00:14:33.140 Firmware Update Granularity: No Information Provided 00:14:33.140 Per-Namespace SMART Log: No 00:14:33.140 Asymmetric Namespace Access Log Page: Not Supported 00:14:33.140 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:33.140 Command Effects Log Page: Supported 00:14:33.140 Get Log Page Extended Data: Supported 00:14:33.140 Telemetry Log Pages: Not Supported 00:14:33.140 Persistent Event Log Pages: Not Supported 00:14:33.140 Supported Log Pages Log Page: May Support 00:14:33.140 Commands Supported & Effects Log Page: Not Supported 00:14:33.140 Feature Identifiers & Effects Log Page:May Support 00:14:33.140 NVMe-MI Commands & Effects Log Page: May Support 00:14:33.140 Data Area 4 for Telemetry Log: Not Supported 00:14:33.140 Error Log Page Entries Supported: 128 00:14:33.140 Keep Alive: Supported 00:14:33.140 Keep Alive Granularity: 10000 ms 00:14:33.140 00:14:33.140 NVM Command Set Attributes 00:14:33.140 ========================== 00:14:33.140 Submission Queue Entry Size 00:14:33.140 Max: 64 00:14:33.140 Min: 64 00:14:33.140 Completion Queue Entry Size 00:14:33.140 Max: 16 00:14:33.140 Min: 16 00:14:33.140 Number of Namespaces: 32 00:14:33.140 Compare Command: Supported 00:14:33.140 Write Uncorrectable Command: Not Supported 00:14:33.140 Dataset Management Command: Supported 00:14:33.140 Write Zeroes Command: Supported 00:14:33.140 Set Features Save Field: Not Supported 00:14:33.140 Reservations: Not Supported 00:14:33.140 Timestamp: Not Supported 00:14:33.140 Copy: Supported 00:14:33.140 Volatile Write Cache: Present 00:14:33.140 Atomic Write Unit (Normal): 1 00:14:33.140 Atomic Write Unit (PFail): 1 00:14:33.140 Atomic Compare & Write Unit: 1 00:14:33.140 Fused Compare & Write: Supported 00:14:33.140 Scatter-Gather List 00:14:33.140 SGL Command Set: Supported (Dword aligned) 00:14:33.140 SGL Keyed: Not Supported 00:14:33.140 SGL Bit Bucket Descriptor: Not Supported 00:14:33.141 SGL Metadata Pointer: Not Supported 00:14:33.141 Oversized SGL: Not Supported 00:14:33.141 SGL Metadata Address: Not Supported 00:14:33.141 SGL Offset: Not Supported 00:14:33.141 Transport SGL Data Block: Not Supported 00:14:33.141 Replay Protected Memory Block: Not Supported 00:14:33.141 00:14:33.141 Firmware Slot Information 00:14:33.141 ========================= 00:14:33.141 Active slot: 1 00:14:33.141 Slot 1 Firmware Revision: 25.01 00:14:33.141 00:14:33.141 00:14:33.141 Commands Supported and Effects 00:14:33.141 ============================== 00:14:33.141 Admin Commands 00:14:33.141 -------------- 00:14:33.141 Get Log Page (02h): Supported 00:14:33.141 Identify (06h): Supported 00:14:33.141 Abort (08h): Supported 00:14:33.141 Set Features (09h): Supported 
00:14:33.141 Get Features (0Ah): Supported 00:14:33.141 Asynchronous Event Request (0Ch): Supported 00:14:33.141 Keep Alive (18h): Supported 00:14:33.141 I/O Commands 00:14:33.141 ------------ 00:14:33.141 Flush (00h): Supported LBA-Change 00:14:33.141 Write (01h): Supported LBA-Change 00:14:33.141 Read (02h): Supported 00:14:33.141 Compare (05h): Supported 00:14:33.141 Write Zeroes (08h): Supported LBA-Change 00:14:33.141 Dataset Management (09h): Supported LBA-Change 00:14:33.141 Copy (19h): Supported LBA-Change 00:14:33.141 00:14:33.141 Error Log 00:14:33.141 ========= 00:14:33.141 00:14:33.141 Arbitration 00:14:33.141 =========== 00:14:33.141 Arbitration Burst: 1 00:14:33.141 00:14:33.141 Power Management 00:14:33.141 ================ 00:14:33.141 Number of Power States: 1 00:14:33.141 Current Power State: Power State #0 00:14:33.141 Power State #0: 00:14:33.141 Max Power: 0.00 W 00:14:33.141 Non-Operational State: Operational 00:14:33.141 Entry Latency: Not Reported 00:14:33.141 Exit Latency: Not Reported 00:14:33.141 Relative Read Throughput: 0 00:14:33.141 Relative Read Latency: 0 00:14:33.141 Relative Write Throughput: 0 00:14:33.141 Relative Write Latency: 0 00:14:33.141 Idle Power: Not Reported 00:14:33.141 Active Power: Not Reported 00:14:33.141 Non-Operational Permissive Mode: Not Supported 00:14:33.141 00:14:33.141 Health Information 00:14:33.141 ================== 00:14:33.141 Critical Warnings: 00:14:33.141 Available Spare Space: OK 00:14:33.141 Temperature: OK 00:14:33.141 Device Reliability: OK 00:14:33.141 Read Only: No 00:14:33.141 Volatile Memory Backup: OK 00:14:33.141 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:33.141 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:33.141 Available Spare: 0% 00:14:33.141 Available Sp[2024-11-19 09:16:34.140075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:33.141 [2024-11-19 09:16:34.147953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:33.141 [2024-11-19 09:16:34.147982] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:33.141 [2024-11-19 09:16:34.147990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.141 [2024-11-19 09:16:34.147996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.141 [2024-11-19 09:16:34.148001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.141 [2024-11-19 09:16:34.148009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.141 [2024-11-19 09:16:34.151955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:33.141 [2024-11-19 09:16:34.151967] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:33.141 [2024-11-19 09:16:34.152089] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:33.141 [2024-11-19 09:16:34.152133] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:33.141 [2024-11-19 09:16:34.152140] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:33.141 [2024-11-19 09:16:34.153096] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:33.141 [2024-11-19 09:16:34.153107] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:33.141 [2024-11-19 09:16:34.153153] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:33.141 [2024-11-19 09:16:34.154136] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:33.141 are Threshold: 0% 00:14:33.141 Life Percentage Used: 0% 00:14:33.141 Data Units Read: 0 00:14:33.141 Data Units Written: 0 00:14:33.141 Host Read Commands: 0 00:14:33.141 Host Write Commands: 0 00:14:33.141 Controller Busy Time: 0 minutes 00:14:33.141 Power Cycles: 0 00:14:33.141 Power On Hours: 0 hours 00:14:33.141 Unsafe Shutdowns: 0 00:14:33.141 Unrecoverable Media Errors: 0 00:14:33.141 Lifetime Error Log Entries: 0 00:14:33.141 Warning Temperature Time: 0 minutes 00:14:33.141 Critical Temperature Time: 0 minutes 00:14:33.141 00:14:33.141 Number of Queues 00:14:33.141 ================ 00:14:33.141 Number of I/O Submission Queues: 127 00:14:33.141 Number of I/O Completion Queues: 127 00:14:33.141 00:14:33.141 Active Namespaces 00:14:33.141 ================= 00:14:33.141 Namespace ID:1 00:14:33.141 Error Recovery Timeout: Unlimited 00:14:33.141 Command Set Identifier: NVM (00h) 00:14:33.141 Deallocate: Supported 00:14:33.141 Deallocated/Unwritten Error: Not Supported 00:14:33.141 Deallocated Read Value: Unknown 00:14:33.141 Deallocate in Write Zeroes: Not Supported 00:14:33.141 Deallocated Guard Field: 0xFFFF 00:14:33.141 Flush: Supported 00:14:33.141 Reservation: Supported 00:14:33.141 Namespace Sharing Capabilities: Multiple Controllers 00:14:33.141 Size (in LBAs): 131072 (0GiB) 00:14:33.141 Capacity (in LBAs): 131072 (0GiB) 00:14:33.141 Utilization (in LBAs): 131072 (0GiB) 00:14:33.141 NGUID: D7915B39C2D74283962B9C04E326E07D 00:14:33.141 UUID: d7915b39-c2d7-4283-962b-9c04e326e07d 00:14:33.141 Thin Provisioning: Not Supported 00:14:33.141 Per-NS Atomic Units: Yes 00:14:33.141 Atomic Boundary Size (Normal): 0 00:14:33.141 Atomic Boundary Size (PFail): 0 00:14:33.141 Atomic Boundary Offset: 0 00:14:33.141 Maximum Single Source Range Length: 65535 00:14:33.141 Maximum Copy Length: 65535 00:14:33.141 Maximum Source Range Count: 1 00:14:33.141 NGUID/EUI64 Never Reused: No 00:14:33.141 Namespace Write Protected: No 00:14:33.141 Number of LBA Formats: 1 00:14:33.141 Current LBA Format: LBA Format #00 00:14:33.141 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:33.141 00:14:33.142 09:16:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:33.401 [2024-11-19 09:16:34.387406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:38.672 Initializing NVMe Controllers 00:14:38.672 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:38.672 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:38.672 Initialization complete. Launching workers. 00:14:38.672 ======================================================== 00:14:38.672 Latency(us) 00:14:38.672 Device Information : IOPS MiB/s Average min max 00:14:38.672 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39913.36 155.91 3206.52 950.84 6661.09 00:14:38.672 ======================================================== 00:14:38.672 Total : 39913.36 155.91 3206.52 950.84 6661.09 00:14:38.672 00:14:38.672 [2024-11-19 09:16:39.491212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:38.672 09:16:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:38.672 [2024-11-19 09:16:39.728148] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:43.944 Initializing NVMe Controllers 00:14:43.944 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:43.944 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:43.944 Initialization complete. Launching workers. 00:14:43.944 ======================================================== 00:14:43.944 Latency(us) 00:14:43.944 Device Information : IOPS MiB/s Average min max 00:14:43.944 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39917.24 155.93 3206.22 974.18 8222.49 00:14:43.944 ======================================================== 00:14:43.944 Total : 39917.24 155.93 3206.22 974.18 8222.49 00:14:43.944 00:14:43.944 [2024-11-19 09:16:44.748569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:43.944 09:16:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:43.944 [2024-11-19 09:16:44.959997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:49.218 [2024-11-19 09:16:50.098041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:49.218 Initializing NVMe Controllers 00:14:49.218 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:49.218 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:49.218 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:49.218 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:49.218 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:49.218 Initialization complete. Launching workers. 
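Before the reconnect worker threads report in below, note that the two perf tables above are internally consistent with Little's law: average latency ≈ queue depth / IOPS. With -q 128, the read run gives 128 / 39913.36 s ≈ 3207 µs against the reported 3206.52 µs average, and the write run matches the same way. A one-line check:

# Little's law sanity check for the -q 128 read run above.
qd, iops = 128, 39913.36
print(qd / iops * 1e6)   # ~3207 us, vs. 3206.52 us reported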
00:14:49.218 Starting thread on core 2 00:14:49.218 Starting thread on core 3 00:14:49.218 Starting thread on core 1 00:14:49.218 09:16:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:49.477 [2024-11-19 09:16:50.398389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:52.768 [2024-11-19 09:16:53.615165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:52.768 Initializing NVMe Controllers 00:14:52.768 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:52.768 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:52.768 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:52.768 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:52.768 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:52.768 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:52.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:52.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:52.768 Initialization complete. Launching workers. 00:14:52.768 Starting thread on core 1 with urgent priority queue 00:14:52.768 Starting thread on core 2 with urgent priority queue 00:14:52.768 Starting thread on core 3 with urgent priority queue 00:14:52.768 Starting thread on core 0 with urgent priority queue 00:14:52.768 SPDK bdev Controller (SPDK2 ) core 0: 5117.67 IO/s 19.54 secs/100000 ios 00:14:52.768 SPDK bdev Controller (SPDK2 ) core 1: 3786.67 IO/s 26.41 secs/100000 ios 00:14:52.768 SPDK bdev Controller (SPDK2 ) core 2: 4170.00 IO/s 23.98 secs/100000 ios 00:14:52.768 SPDK bdev Controller (SPDK2 ) core 3: 4446.00 IO/s 22.49 secs/100000 ios 00:14:52.768 ======================================================== 00:14:52.768 00:14:52.768 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:53.028 [2024-11-19 09:16:53.904352] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:53.028 Initializing NVMe Controllers 00:14:53.028 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:53.028 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:53.028 Namespace ID: 1 size: 0GB 00:14:53.028 Initialization complete. 00:14:53.028 INFO: using host memory buffer for IO 00:14:53.028 Hello world! 
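In the arbitration table above, the secs/100000 ios column is the fixed per-core budget of 100000 I/Os (-n 100000 on the command line) divided by the measured rate, e.g. 100000 / 5117.67 ≈ 19.54 s for core 0:

# Reproduce the secs/100000-ios column from the arbitration run above.
for core, rate in [(0, 5117.67), (1, 3786.67), (2, 4170.00), (3, 4446.00)]:
    print(core, round(100000 / rate, 2))   # 19.54, 26.41, 23.98, 22.49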
00:14:53.028 [2024-11-19 09:16:53.913420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:53.028 09:16:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:53.287 [2024-11-19 09:16:54.196930] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.668 Initializing NVMe Controllers 00:14:54.668 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.668 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.668 Initialization complete. Launching workers. 00:14:54.668 submit (in ns) avg, min, max = 6373.8, 3202.6, 4000202.6 00:14:54.668 complete (in ns) avg, min, max = 22092.5, 1759.1, 4994994.8 00:14:54.668 00:14:54.668 Submit histogram 00:14:54.668 ================ 00:14:54.668 Range in us Cumulative Count 00:14:54.668 3.200 - 3.214: 0.0061% ( 1) 00:14:54.668 3.214 - 3.228: 0.0671% ( 10) 00:14:54.668 3.228 - 3.242: 0.1343% ( 11) 00:14:54.668 3.242 - 3.256: 0.1709% ( 6) 00:14:54.668 3.256 - 3.270: 0.3052% ( 22) 00:14:54.668 3.270 - 3.283: 1.0193% ( 117) 00:14:54.668 3.283 - 3.297: 4.0527% ( 497) 00:14:54.668 3.297 - 3.311: 8.4229% ( 716) 00:14:54.668 3.311 - 3.325: 13.7451% ( 872) 00:14:54.668 3.325 - 3.339: 19.6411% ( 966) 00:14:54.668 3.339 - 3.353: 25.3662% ( 938) 00:14:54.668 3.353 - 3.367: 30.7495% ( 882) 00:14:54.668 3.367 - 3.381: 36.4197% ( 929) 00:14:54.668 3.381 - 3.395: 42.1265% ( 935) 00:14:54.668 3.395 - 3.409: 46.4600% ( 710) 00:14:54.668 3.409 - 3.423: 50.2869% ( 627) 00:14:54.668 3.423 - 3.437: 54.9255% ( 760) 00:14:54.668 3.437 - 3.450: 61.2427% ( 1035) 00:14:54.668 3.450 - 3.464: 66.2415% ( 819) 00:14:54.668 3.464 - 3.478: 71.3867% ( 843) 00:14:54.668 3.478 - 3.492: 76.8127% ( 889) 00:14:54.668 3.492 - 3.506: 80.5725% ( 616) 00:14:54.668 3.506 - 3.520: 83.6182% ( 499) 00:14:54.668 3.520 - 3.534: 85.4675% ( 303) 00:14:54.668 3.534 - 3.548: 86.5234% ( 173) 00:14:54.668 3.548 - 3.562: 87.2375% ( 117) 00:14:54.668 3.562 - 3.590: 88.1714% ( 153) 00:14:54.668 3.590 - 3.617: 89.5752% ( 230) 00:14:54.668 3.617 - 3.645: 91.2109% ( 268) 00:14:54.668 3.645 - 3.673: 92.9932% ( 292) 00:14:54.668 3.673 - 3.701: 94.5862% ( 261) 00:14:54.668 3.701 - 3.729: 96.2097% ( 266) 00:14:54.668 3.729 - 3.757: 97.6685% ( 239) 00:14:54.668 3.757 - 3.784: 98.5779% ( 149) 00:14:54.668 3.784 - 3.812: 99.0601% ( 79) 00:14:54.668 3.812 - 3.840: 99.4080% ( 57) 00:14:54.668 3.840 - 3.868: 99.5728% ( 27) 00:14:54.668 3.868 - 3.896: 99.6277% ( 9) 00:14:54.668 3.896 - 3.923: 99.6521% ( 4) 00:14:54.668 3.951 - 3.979: 99.6582% ( 1) 00:14:54.668 4.007 - 4.035: 99.6704% ( 2) 00:14:54.668 5.287 - 5.315: 99.6765% ( 1) 00:14:54.668 5.315 - 5.343: 99.6826% ( 1) 00:14:54.668 5.398 - 5.426: 99.6887% ( 1) 00:14:54.668 5.426 - 5.454: 99.6948% ( 1) 00:14:54.668 5.454 - 5.482: 99.7009% ( 1) 00:14:54.668 5.482 - 5.510: 99.7070% ( 1) 00:14:54.668 5.510 - 5.537: 99.7192% ( 2) 00:14:54.668 5.537 - 5.565: 99.7253% ( 1) 00:14:54.668 5.565 - 5.593: 99.7314% ( 1) 00:14:54.668 5.593 - 5.621: 99.7437% ( 2) 00:14:54.668 5.649 - 5.677: 99.7498% ( 1) 00:14:54.668 5.704 - 5.732: 99.7559% ( 1) 00:14:54.668 6.038 - 6.066: 99.7620% ( 1) 00:14:54.668 6.094 - 6.122: 99.7681% ( 1) 00:14:54.668 6.150 - 6.177: 99.7742% ( 1) 00:14:54.668 6.177 - 6.205: 99.7803% ( 1) 00:14:54.668 6.344 - 
6.372: 99.7864% ( 1) 00:14:54.668 6.511 - 6.539: 99.7925% ( 1) 00:14:54.668 6.567 - 6.595: 99.7986% ( 1) 00:14:54.668 6.623 - 6.650: 99.8047% ( 1) 00:14:54.668 6.678 - 6.706: 99.8169% ( 2) 00:14:54.668 6.957 - 6.984: 99.8230% ( 1) 00:14:54.668 7.040 - 7.068: 99.8291% ( 1) 00:14:54.668 7.096 - 7.123: 99.8413% ( 2) 00:14:54.668 7.123 - 7.179: 99.8474% ( 1) 00:14:54.668 7.179 - 7.235: 99.8535% ( 1) 00:14:54.668 7.346 - 7.402: 99.8596% ( 1) 00:14:54.668 7.402 - 7.457: 99.8718% ( 2) 00:14:54.668 7.680 - 7.736: 99.8779% ( 1) 00:14:54.668 7.736 - 7.791: 99.8840% ( 1) 00:14:54.668 7.791 - 7.847: 99.8901% ( 1) 00:14:54.668 8.348 - 8.403: 99.8962% ( 1) 00:14:54.668 8.459 - 8.515: 99.9023% ( 1) 00:14:54.668 9.461 - 9.517: 99.9084% ( 1) 00:14:54.668 9.517 - 9.572: 99.9146% ( 1) 00:14:54.668 13.301 - 13.357: 99.9207% ( 1) 00:14:54.668 19.033 - 19.144: 99.9268% ( 1) 00:14:54.668 3989.148 - 4017.642: 100.0000% ( 12) 00:14:54.668 00:14:54.668 [2024-11-19 09:16:55.290000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.668 Complete histogram 00:14:54.668 ================== 00:14:54.668 Range in us Cumulative Count 00:14:54.668 1.753 - 1.760: 0.0061% ( 1) 00:14:54.668 1.760 - 1.767: 0.0183% ( 2) 00:14:54.668 1.767 - 1.774: 0.1587% ( 23) 00:14:54.668 1.774 - 1.781: 0.3906% ( 38) 00:14:54.668 1.781 - 1.795: 0.7568% ( 60) 00:14:54.668 1.795 - 1.809: 0.7874% ( 5) 00:14:54.668 1.809 - 1.823: 4.3396% ( 582) 00:14:54.668 1.823 - 1.837: 48.9136% ( 7303) 00:14:54.668 1.837 - 1.850: 78.6621% ( 4874) 00:14:54.668 1.850 - 1.864: 83.3374% ( 766) 00:14:54.668 1.864 - 1.878: 91.5283% ( 1342) 00:14:54.668 1.878 - 1.892: 95.4712% ( 646) 00:14:54.668 1.892 - 1.906: 97.0764% ( 263) 00:14:54.668 1.906 - 1.920: 98.2361% ( 190) 00:14:54.668 1.920 - 1.934: 98.8281% ( 97) 00:14:54.668 1.934 - 1.948: 99.0234% ( 32) 00:14:54.668 1.948 - 1.962: 99.1028% ( 13) 00:14:54.668 1.962 - 1.976: 99.1760% ( 12) 00:14:54.668 1.976 - 1.990: 99.1882% ( 2) 00:14:54.668 1.990 - 2.003: 99.1943% ( 1) 00:14:54.668 2.003 - 2.017: 99.2126% ( 3) 00:14:54.668 2.017 - 2.031: 99.2371% ( 4) 00:14:54.668 2.031 - 2.045: 99.2493% ( 2) 00:14:54.668 2.045 - 2.059: 99.2554% ( 1) 00:14:54.668 2.059 - 2.073: 99.2676% ( 2) 00:14:54.668 2.073 - 2.087: 99.2798% ( 2) 00:14:54.668 2.101 - 2.115: 99.2859% ( 1) 00:14:54.668 2.129 - 2.143: 99.2920% ( 1) 00:14:54.668 2.157 - 2.170: 99.2981% ( 1) 00:14:54.668 2.226 - 2.240: 99.3042% ( 1) 00:14:54.668 2.282 - 2.296: 99.3103% ( 1) 00:14:54.669 2.296 - 2.310: 99.3225% ( 2) 00:14:54.669 2.323 - 2.337: 99.3286% ( 1) 00:14:54.669 3.784 - 3.812: 99.3347% ( 1) 00:14:54.669 3.868 - 3.896: 99.3408% ( 1) 00:14:54.669 3.979 - 4.007: 99.3469% ( 1) 00:14:54.669 4.007 - 4.035: 99.3591% ( 2) 00:14:54.669 4.063 - 4.090: 99.3652% ( 1) 00:14:54.669 4.257 - 4.285: 99.3713% ( 1) 00:14:54.669 4.452 - 4.480: 99.3774% ( 1) 00:14:54.669 4.480 - 4.508: 99.3835% ( 1) 00:14:54.669 4.508 - 4.536: 99.3896% ( 1) 00:14:54.669 4.703 - 4.730: 99.3958% ( 1) 00:14:54.669 5.120 - 5.148: 99.4019% ( 1) 00:14:54.669 5.148 - 5.176: 99.4141% ( 2) 00:14:54.669 5.343 - 5.370: 99.4202% ( 1) 00:14:54.669 5.370 - 5.398: 99.4263% ( 1) 00:14:54.669 5.426 - 5.454: 99.4324% ( 1) 00:14:54.669 6.094 - 6.122: 99.4385% ( 1) 00:14:54.669 6.261 - 6.289: 99.4446% ( 1) 00:14:54.669 6.483 - 6.511: 99.4507% ( 1) 00:14:54.669 6.595 - 6.623: 99.4568% ( 1) 00:14:54.669 6.817 - 6.845: 99.4629% ( 1) 00:14:54.669 6.845 - 6.873: 99.4690% ( 1) 00:14:54.669 7.680 - 7.736: 99.4751% ( 1) 00:14:54.669 8.682 - 8.737: 99.4812% ( 1) 
00:14:54.669 9.683 - 9.739: 99.4873% ( 1) 00:14:54.669 43.186 - 43.409: 99.4934% ( 1) 00:14:54.669 2991.861 - 3006.108: 99.4995% ( 1) 00:14:54.669 3020.355 - 3034.602: 99.5056% ( 1) 00:14:54.669 3034.602 - 3048.849: 99.5117% ( 1) 00:14:54.669 3989.148 - 4017.642: 99.9817% ( 77) 00:14:54.669 4986.435 - 5014.929: 100.0000% ( 3) 00:14:54.669 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:54.669 [ 00:14:54.669 { 00:14:54.669 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:54.669 "subtype": "Discovery", 00:14:54.669 "listen_addresses": [], 00:14:54.669 "allow_any_host": true, 00:14:54.669 "hosts": [] 00:14:54.669 }, 00:14:54.669 { 00:14:54.669 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:54.669 "subtype": "NVMe", 00:14:54.669 "listen_addresses": [ 00:14:54.669 { 00:14:54.669 "trtype": "VFIOUSER", 00:14:54.669 "adrfam": "IPv4", 00:14:54.669 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:54.669 "trsvcid": "0" 00:14:54.669 } 00:14:54.669 ], 00:14:54.669 "allow_any_host": true, 00:14:54.669 "hosts": [], 00:14:54.669 "serial_number": "SPDK1", 00:14:54.669 "model_number": "SPDK bdev Controller", 00:14:54.669 "max_namespaces": 32, 00:14:54.669 "min_cntlid": 1, 00:14:54.669 "max_cntlid": 65519, 00:14:54.669 "namespaces": [ 00:14:54.669 { 00:14:54.669 "nsid": 1, 00:14:54.669 "bdev_name": "Malloc1", 00:14:54.669 "name": "Malloc1", 00:14:54.669 "nguid": "4D7AFD2C1567477DB6B8ADF5124A3803", 00:14:54.669 "uuid": "4d7afd2c-1567-477d-b6b8-adf5124a3803" 00:14:54.669 }, 00:14:54.669 { 00:14:54.669 "nsid": 2, 00:14:54.669 "bdev_name": "Malloc3", 00:14:54.669 "name": "Malloc3", 00:14:54.669 "nguid": "FA3B3BCD0D374D789CD6E2ADD6FEA943", 00:14:54.669 "uuid": "fa3b3bcd-0d37-4d78-9cd6-e2add6fea943" 00:14:54.669 } 00:14:54.669 ] 00:14:54.669 }, 00:14:54.669 { 00:14:54.669 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:54.669 "subtype": "NVMe", 00:14:54.669 "listen_addresses": [ 00:14:54.669 { 00:14:54.669 "trtype": "VFIOUSER", 00:14:54.669 "adrfam": "IPv4", 00:14:54.669 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:54.669 "trsvcid": "0" 00:14:54.669 } 00:14:54.669 ], 00:14:54.669 "allow_any_host": true, 00:14:54.669 "hosts": [], 00:14:54.669 "serial_number": "SPDK2", 00:14:54.669 "model_number": "SPDK bdev Controller", 00:14:54.669 "max_namespaces": 32, 00:14:54.669 "min_cntlid": 1, 00:14:54.669 "max_cntlid": 65519, 00:14:54.669 "namespaces": [ 00:14:54.669 { 00:14:54.669 "nsid": 1, 00:14:54.669 "bdev_name": "Malloc2", 00:14:54.669 "name": "Malloc2", 00:14:54.669 "nguid": "D7915B39C2D74283962B9C04E326E07D", 00:14:54.669 "uuid": "d7915b39-c2d7-4283-962b-9c04e326e07d" 00:14:54.669 } 00:14:54.669 ] 00:14:54.669 } 00:14:54.669 ] 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:54.669 
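For a quick sanity check, the nvmf_get_subsystems JSON above can be summarized with jq (assuming jq is available on the build host, which the log does not confirm; the rpc.py path matches this workspace):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
      | jq -r '.[] | select(.subtype=="NVMe") | "\(.nqn): \(.namespaces | length) namespace(s)"'
  # from the listing above this would report cnode1 with 2 namespaces and cnode2 with 1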
09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1082760 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:54.669 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:54.669 [2024-11-19 09:16:55.704397] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.928 Malloc4 00:14:54.928 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:54.928 [2024-11-19 09:16:55.954303] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.928 09:16:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:55.187 Asynchronous Event Request test 00:14:55.188 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.188 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.188 Registering asynchronous event callbacks... 00:14:55.188 Starting namespace attribute notice tests for all controllers... 00:14:55.188 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:55.188 aer_cb - Changed Namespace 00:14:55.188 Cleaning up... 
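Condensed, the namespace-change AEN exercise above is this sequence (paths and flags as logged; the wait loop is a sketch standing in for the waitforfile helper, not the exact script):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  AER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer
  $AER -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -n 2 -g -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # listener touches the file once armed
  rm -f /tmp/aer_touch_file
  $RPC bdev_malloc_create 64 512 --name Malloc4
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # fires the AEN ('aer_cb - Changed Namespace')
  wait   # aer exits after handling the event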
00:14:55.188 [ 00:14:55.188 { 00:14:55.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:55.188 "subtype": "Discovery", 00:14:55.188 "listen_addresses": [], 00:14:55.188 "allow_any_host": true, 00:14:55.188 "hosts": [] 00:14:55.188 }, 00:14:55.188 { 00:14:55.188 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:55.188 "subtype": "NVMe", 00:14:55.188 "listen_addresses": [ 00:14:55.188 { 00:14:55.188 "trtype": "VFIOUSER", 00:14:55.188 "adrfam": "IPv4", 00:14:55.188 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:55.188 "trsvcid": "0" 00:14:55.188 } 00:14:55.188 ], 00:14:55.188 "allow_any_host": true, 00:14:55.188 "hosts": [], 00:14:55.188 "serial_number": "SPDK1", 00:14:55.188 "model_number": "SPDK bdev Controller", 00:14:55.188 "max_namespaces": 32, 00:14:55.188 "min_cntlid": 1, 00:14:55.188 "max_cntlid": 65519, 00:14:55.188 "namespaces": [ 00:14:55.188 { 00:14:55.188 "nsid": 1, 00:14:55.188 "bdev_name": "Malloc1", 00:14:55.188 "name": "Malloc1", 00:14:55.188 "nguid": "4D7AFD2C1567477DB6B8ADF5124A3803", 00:14:55.188 "uuid": "4d7afd2c-1567-477d-b6b8-adf5124a3803" 00:14:55.188 }, 00:14:55.188 { 00:14:55.188 "nsid": 2, 00:14:55.188 "bdev_name": "Malloc3", 00:14:55.188 "name": "Malloc3", 00:14:55.188 "nguid": "FA3B3BCD0D374D789CD6E2ADD6FEA943", 00:14:55.188 "uuid": "fa3b3bcd-0d37-4d78-9cd6-e2add6fea943" 00:14:55.188 } 00:14:55.188 ] 00:14:55.188 }, 00:14:55.188 { 00:14:55.188 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:55.188 "subtype": "NVMe", 00:14:55.188 "listen_addresses": [ 00:14:55.188 { 00:14:55.188 "trtype": "VFIOUSER", 00:14:55.188 "adrfam": "IPv4", 00:14:55.188 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:55.188 "trsvcid": "0" 00:14:55.188 } 00:14:55.188 ], 00:14:55.188 "allow_any_host": true, 00:14:55.188 "hosts": [], 00:14:55.188 "serial_number": "SPDK2", 00:14:55.188 "model_number": "SPDK bdev Controller", 00:14:55.188 "max_namespaces": 32, 00:14:55.188 "min_cntlid": 1, 00:14:55.188 "max_cntlid": 65519, 00:14:55.188 "namespaces": [ 00:14:55.188 { 00:14:55.188 "nsid": 1, 00:14:55.188 "bdev_name": "Malloc2", 00:14:55.188 "name": "Malloc2", 00:14:55.188 "nguid": "D7915B39C2D74283962B9C04E326E07D", 00:14:55.188 "uuid": "d7915b39-c2d7-4283-962b-9c04e326e07d" 00:14:55.188 }, 00:14:55.188 { 00:14:55.188 "nsid": 2, 00:14:55.188 "bdev_name": "Malloc4", 00:14:55.188 "name": "Malloc4", 00:14:55.188 "nguid": "AF86B5948690493EB088EC59A9D74BF7", 00:14:55.188 "uuid": "af86b594-8690-493e-b088-ec59a9d74bf7" 00:14:55.188 } 00:14:55.188 ] 00:14:55.188 } 00:14:55.188 ] 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1082760 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1074983 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 1074983 ']' 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1074983 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1074983 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1074983' 00:14:55.188 killing process with pid 1074983 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1074983 00:14:55.188 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1074983 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1082857 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1082857' 00:14:55.448 Process pid: 1082857 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1082857 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1082857 ']' 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:55.448 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:55.707 [2024-11-19 09:16:56.530650] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:55.707 [2024-11-19 09:16:56.531548] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
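For reference, the interrupt-mode target launch above reduces to the following (flags exactly as logged; the '-M -I' pair is held back in transport_args and passed to nvmf_create_transport further down):

  # -i 0: shared-memory id; -e 0xFFFF: tracepoint group mask (see the
  # spdk_trace notices below); -m '[0,1,2,3]': one reactor per core 0-3;
  # --interrupt-mode: reactors run interrupt-driven instead of polling
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &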
00:14:55.707 [2024-11-19 09:16:56.531587] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.707 [2024-11-19 09:16:56.605510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.708 [2024-11-19 09:16:56.647778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.708 [2024-11-19 09:16:56.647816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.708 [2024-11-19 09:16:56.647823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.708 [2024-11-19 09:16:56.647830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.708 [2024-11-19 09:16:56.647835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.708 [2024-11-19 09:16:56.649378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.708 [2024-11-19 09:16:56.649491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.708 [2024-11-19 09:16:56.649619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.708 [2024-11-19 09:16:56.649620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.708 [2024-11-19 09:16:56.717091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:55.708 [2024-11-19 09:16:56.717449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:55.708 [2024-11-19 09:16:56.717994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:55.708 [2024-11-19 09:16:56.718382] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:55.708 [2024-11-19 09:16:56.718428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
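The per-device bring-up traced next (one malloc-backed subsystem per vfio-user socket, for i in 1 2) could be reproduced standalone roughly as follows (same paths and RPCs as the traced script; error handling omitted):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t VFIOUSER -M -I   # the transport_args from setup_nvmf_vfio_user
  mkdir -p /var/run/vfio-user
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $RPC bdev_malloc_create 64 512 -b Malloc$i
      $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done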
00:14:55.708 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:55.708 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:55.708 09:16:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:57.087 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:57.087 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:57.087 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:57.087 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:57.087 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:57.087 09:16:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:57.346 Malloc1 00:14:57.346 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:57.605 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:57.605 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:57.864 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:57.864 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:57.864 09:16:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:58.122 Malloc2 00:14:58.122 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:58.379 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:58.379 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:58.636 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:58.636 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1082857 00:14:58.636 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 1082857 ']' 00:14:58.637 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1082857 00:14:58.637 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:58.637 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:58.637 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1082857 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1082857' 00:14:58.896 killing process with pid 1082857 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1082857 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1082857 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:58.896 00:14:58.896 real 0m51.302s 00:14:58.896 user 3m18.254s 00:14:58.896 sys 0m3.477s 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:58.896 ************************************ 00:14:58.896 END TEST nvmf_vfio_user 00:14:58.896 ************************************ 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:58.896 09:16:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:59.156 ************************************ 00:14:59.156 START TEST nvmf_vfio_user_nvme_compliance 00:14:59.156 ************************************ 00:14:59.156 09:16:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:59.156 * Looking for test storage... 
00:14:59.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:59.156 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:59.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.157 --rc genhtml_branch_coverage=1 00:14:59.157 --rc genhtml_function_coverage=1 00:14:59.157 --rc genhtml_legend=1 00:14:59.157 --rc geninfo_all_blocks=1 00:14:59.157 --rc geninfo_unexecuted_blocks=1 00:14:59.157 00:14:59.157 ' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:59.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.157 --rc genhtml_branch_coverage=1 00:14:59.157 --rc genhtml_function_coverage=1 00:14:59.157 --rc genhtml_legend=1 00:14:59.157 --rc geninfo_all_blocks=1 00:14:59.157 --rc geninfo_unexecuted_blocks=1 00:14:59.157 00:14:59.157 ' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:59.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.157 --rc genhtml_branch_coverage=1 00:14:59.157 --rc genhtml_function_coverage=1 00:14:59.157 --rc genhtml_legend=1 00:14:59.157 --rc geninfo_all_blocks=1 00:14:59.157 --rc geninfo_unexecuted_blocks=1 00:14:59.157 00:14:59.157 ' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:59.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.157 --rc genhtml_branch_coverage=1 00:14:59.157 --rc genhtml_function_coverage=1 00:14:59.157 --rc genhtml_legend=1 00:14:59.157 --rc geninfo_all_blocks=1 00:14:59.157 --rc 
geninfo_unexecuted_blocks=1 00:14:59.157 00:14:59.157 ' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:59.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1083613 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1083613' 00:14:59.157 Process pid: 1083613 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1083613 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 1083613 ']' 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:59.157 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:59.417 [2024-11-19 09:17:00.229958] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
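The compliance fixture assembled in the next few steps (a single vfio-user subsystem exercised by the nvme_compliance binary) amounts to roughly this, transcribed from the rpc_cmd calls traced below:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  $RPC bdev_malloc_create 64 512 -b malloc0
  $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -a: allow any host, -m: max 32 namespaces
  $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'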
00:14:59.417 [2024-11-19 09:17:00.230006] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.417 [2024-11-19 09:17:00.289299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:59.417 [2024-11-19 09:17:00.332069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.417 [2024-11-19 09:17:00.332104] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.417 [2024-11-19 09:17:00.332111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.417 [2024-11-19 09:17:00.332117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.417 [2024-11-19 09:17:00.332122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.417 [2024-11-19 09:17:00.333548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.417 [2024-11-19 09:17:00.333656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.417 [2024-11-19 09:17:00.333657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.417 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:59.417 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:14:59.417 09:17:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:00.795 malloc0 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:00.795 09:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.795 09:17:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:00.795 00:15:00.795 00:15:00.795 CUnit - A unit testing framework for C - Version 2.1-3 00:15:00.795 http://cunit.sourceforge.net/ 00:15:00.795 00:15:00.795 00:15:00.795 Suite: nvme_compliance 00:15:00.795 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 09:17:01.677381] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.795 [2024-11-19 09:17:01.678726] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:00.795 [2024-11-19 09:17:01.678741] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:00.795 [2024-11-19 09:17:01.678748] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:00.795 [2024-11-19 09:17:01.680400] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.795 passed 00:15:00.795 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 09:17:01.759960] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.795 [2024-11-19 09:17:01.765994] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.795 passed 00:15:00.795 Test: admin_identify_ns ...[2024-11-19 09:17:01.843823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.054 [2024-11-19 09:17:01.902960] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:01.054 [2024-11-19 09:17:01.910959] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:01.054 [2024-11-19 09:17:01.932049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:01.054 passed 00:15:01.054 Test: admin_get_features_mandatory_features ...[2024-11-19 09:17:02.009980] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.054 [2024-11-19 09:17:02.016017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.054 passed 00:15:01.054 Test: admin_get_features_optional_features ...[2024-11-19 09:17:02.092494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.054 [2024-11-19 09:17:02.097528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.313 passed 00:15:01.313 Test: admin_set_features_number_of_queues ...[2024-11-19 09:17:02.176315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.313 [2024-11-19 09:17:02.281050] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.313 passed 00:15:01.313 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 09:17:02.357780] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.313 [2024-11-19 09:17:02.360807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.572 passed 00:15:01.572 Test: admin_get_log_page_with_lpo ...[2024-11-19 09:17:02.443673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.572 [2024-11-19 09:17:02.511965] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:01.572 [2024-11-19 09:17:02.529016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.572 passed 00:15:01.572 Test: fabric_property_get ...[2024-11-19 09:17:02.609638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.572 [2024-11-19 09:17:02.610882] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:01.572 [2024-11-19 09:17:02.612656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.831 passed 00:15:01.831 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 09:17:02.691176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.831 [2024-11-19 09:17:02.692418] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:01.831 [2024-11-19 09:17:02.694196] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:01.831 passed 00:15:01.831 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 09:17:02.772983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:01.831 [2024-11-19 09:17:02.857956] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:01.831 [2024-11-19 09:17:02.873953] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:01.831 [2024-11-19 09:17:02.879040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.090 passed 00:15:02.090 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 09:17:02.955836] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.090 [2024-11-19 09:17:02.957074] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:02.090 [2024-11-19 09:17:02.958859] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.090 passed 00:15:02.090 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 09:17:03.038714] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.090 [2024-11-19 09:17:03.117953] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:02.090 [2024-11-19 09:17:03.141958] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:02.090 [2024-11-19 09:17:03.147028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.349 passed 00:15:02.349 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 09:17:03.220003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.349 [2024-11-19 09:17:03.221248] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:02.349 [2024-11-19 09:17:03.221273] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:02.349 [2024-11-19 09:17:03.223022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.349 passed 00:15:02.349 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 09:17:03.300866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.349 [2024-11-19 09:17:03.393954] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:02.349 [2024-11-19 09:17:03.401963] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:02.608 [2024-11-19 09:17:03.409956] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:02.608 [2024-11-19 09:17:03.417956] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:02.608 [2024-11-19 09:17:03.447029] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.608 passed 00:15:02.608 Test: admin_create_io_sq_verify_pc ...[2024-11-19 09:17:03.520986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.608 [2024-11-19 09:17:03.540962] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:02.608 [2024-11-19 09:17:03.558215] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.608 passed 00:15:02.608 Test: admin_create_io_qp_max_qps ...[2024-11-19 09:17:03.636723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.986 [2024-11-19 09:17:04.731958] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:04.245 [2024-11-19 09:17:05.104938] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.245 passed 00:15:04.245 Test: admin_create_io_sq_shared_cq ...[2024-11-19 09:17:05.182047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.504 [2024-11-19 09:17:05.315958] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:04.504 [2024-11-19 09:17:05.353013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.504 passed 00:15:04.504 00:15:04.504 Run Summary: Type Total Ran Passed Failed Inactive 00:15:04.504 suites 1 1 n/a 0 0 00:15:04.504 tests 18 18 18 0 0 00:15:04.504 asserts 
360 360 360 0 n/a 00:15:04.504 00:15:04.504 Elapsed time = 1.513 seconds 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1083613 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 1083613 ']' 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 1083613 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1083613 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1083613' 00:15:04.504 killing process with pid 1083613 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 1083613 00:15:04.504 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 1083613 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:04.764 00:15:04.764 real 0m5.669s 00:15:04.764 user 0m15.834s 00:15:04.764 sys 0m0.523s 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:04.764 ************************************ 00:15:04.764 END TEST nvmf_vfio_user_nvme_compliance 00:15:04.764 ************************************ 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.764 ************************************ 00:15:04.764 START TEST nvmf_vfio_user_fuzz 00:15:04.764 ************************************ 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:04.764 * Looking for test storage... 
00:15:04.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:04.764 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:05.023 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:05.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.024 --rc genhtml_branch_coverage=1 00:15:05.024 --rc genhtml_function_coverage=1 00:15:05.024 --rc genhtml_legend=1 00:15:05.024 --rc geninfo_all_blocks=1 00:15:05.024 --rc geninfo_unexecuted_blocks=1 00:15:05.024 00:15:05.024 ' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:05.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.024 --rc genhtml_branch_coverage=1 00:15:05.024 --rc genhtml_function_coverage=1 00:15:05.024 --rc genhtml_legend=1 00:15:05.024 --rc geninfo_all_blocks=1 00:15:05.024 --rc geninfo_unexecuted_blocks=1 00:15:05.024 00:15:05.024 ' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:05.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.024 --rc genhtml_branch_coverage=1 00:15:05.024 --rc genhtml_function_coverage=1 00:15:05.024 --rc genhtml_legend=1 00:15:05.024 --rc geninfo_all_blocks=1 00:15:05.024 --rc geninfo_unexecuted_blocks=1 00:15:05.024 00:15:05.024 ' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:05.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.024 --rc genhtml_branch_coverage=1 00:15:05.024 --rc genhtml_function_coverage=1 00:15:05.024 --rc genhtml_legend=1 00:15:05.024 --rc geninfo_all_blocks=1 00:15:05.024 --rc geninfo_unexecuted_blocks=1 00:15:05.024 00:15:05.024 ' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:05.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1084602 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1084602' 00:15:05.024 Process pid: 1084602 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1084602 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 1084602 ']' 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.024 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:05.025 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
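[editorial sketch] The "[: : integer expression expected" message traced above is the shell's test builtin receiving an empty string where -eq expects an integer: the variable expanded at nvmf/common.sh line 33 is unset in this environment (its name is expanded away in the trace, so the placeholder below is hypothetical). A minimal standalone reproduction and a defensive rewrite:

    flag=''                          # empty, exactly as traced: '[' '' -eq 1 ']'
    if [ "$flag" -eq 1 ]; then       # -eq compares integers; '' raises the error
        echo 'flag set'
    fi
    # Defensive form: default the expansion to 0 so test always sees an integer.
    if [ "${flag:-0}" -eq 1 ]; then
        echo 'flag set'
    fi

The script continues past the error because the failing test simply takes the false branch; the build_nvmf_app_args flow resumes at line 37 as traced below.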
00:15:05.025 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:05.025 09:17:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:05.311 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:05.311 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:05.311 09:17:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:06.355 malloc0 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
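[editorial sketch] The fuzz target assembled in the traces above maps one-to-one onto plain RPC calls; a condensed replay via scripts/rpc.py, assuming an nvmf_tgt is already running on the default RPC socket (the rpc_cmd helper in the trace is a thin wrapper around this):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    mkdir -p /var/run/vfio-user
    $rpc nvmf_create_transport -t VFIOUSER
    $rpc bdev_malloc_create 64 512 -b malloc0          # 64 MB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    # nvme_fuzz is then pointed at the listener using the trid string built above:
    #   trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user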
00:15:06.355 09:17:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:38.492 Fuzzing completed. Shutting down the fuzz application 00:15:38.492 00:15:38.492 Dumping successful admin opcodes: 00:15:38.492 8, 9, 10, 24, 00:15:38.492 Dumping successful io opcodes: 00:15:38.492 0, 00:15:38.492 NS: 0x20000081ef00 I/O qp, Total commands completed: 1035928, total successful commands: 4087, random_seed: 2600212544 00:15:38.492 NS: 0x20000081ef00 admin qp, Total commands completed: 257506, total successful commands: 2078, random_seed: 656565952 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1084602 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 1084602 ']' 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 1084602 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1084602 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1084602' 00:15:38.492 killing process with pid 1084602 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 1084602 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 1084602 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:38.492 00:15:38.492 real 0m32.214s 00:15:38.492 user 0m30.383s 00:15:38.492 sys 0m31.712s 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:38.492 
************************************ 00:15:38.492 END TEST nvmf_vfio_user_fuzz 00:15:38.492 ************************************ 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:38.492 ************************************ 00:15:38.492 START TEST nvmf_auth_target 00:15:38.492 ************************************ 00:15:38.492 09:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:38.492 * Looking for test storage... 00:15:38.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:38.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.492 --rc genhtml_branch_coverage=1 00:15:38.492 --rc genhtml_function_coverage=1 00:15:38.492 --rc genhtml_legend=1 00:15:38.492 --rc geninfo_all_blocks=1 00:15:38.492 --rc geninfo_unexecuted_blocks=1 00:15:38.492 00:15:38.492 ' 00:15:38.492 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:38.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.493 --rc genhtml_branch_coverage=1 00:15:38.493 --rc genhtml_function_coverage=1 00:15:38.493 --rc genhtml_legend=1 00:15:38.493 --rc geninfo_all_blocks=1 00:15:38.493 --rc geninfo_unexecuted_blocks=1 00:15:38.493 00:15:38.493 ' 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:38.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.493 --rc genhtml_branch_coverage=1 00:15:38.493 --rc genhtml_function_coverage=1 00:15:38.493 --rc genhtml_legend=1 00:15:38.493 --rc geninfo_all_blocks=1 00:15:38.493 --rc geninfo_unexecuted_blocks=1 00:15:38.493 00:15:38.493 ' 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:38.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.493 --rc genhtml_branch_coverage=1 00:15:38.493 --rc genhtml_function_coverage=1 00:15:38.493 --rc genhtml_legend=1 00:15:38.493 --rc geninfo_all_blocks=1 00:15:38.493 --rc geninfo_unexecuted_blocks=1 00:15:38.493 00:15:38.493 ' 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.493 09:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:38.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:38.493 09:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:43.769 
09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:43.769 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:43.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:43.770 09:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:43.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:43.770 Found net devices under 0000:86:00.0: cvl_0_0 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:43.770 Found net devices under 0000:86:00.1: cvl_0_1 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:43.770 09:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:43.770 09:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:43.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:43.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:15:43.770 00:15:43.770 --- 10.0.0.2 ping statistics --- 00:15:43.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.770 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:43.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:43.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:15:43.770 00:15:43.770 --- 10.0.0.1 ping statistics --- 00:15:43.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:43.770 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.770 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1092911 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1092911 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1092911 ']' 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
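[editorial sketch] The data path just verified by the two pings is built entirely from the commands traced above; condensed, nvmf_tcp_init moves one physical port (cvl_0_0) into a private namespace to act as the target and leaves its peer (cvl_0_1) in the root namespace as the initiator:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With both pings succeeding at sub-millisecond RTT, nvmf_tgt is launched inside the namespace (NVMF_TARGET_NS_CMD) so that it listens on the target side of the link, as the traces below show.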
00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1092933 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7ac994027c8eb7f83f9217cd94a7bf2af2bed57f92257365 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Doo 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7ac994027c8eb7f83f9217cd94a7bf2af2bed57f92257365 0 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7ac994027c8eb7f83f9217cd94a7bf2af2bed57f92257365 0 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7ac994027c8eb7f83f9217cd94a7bf2af2bed57f92257365 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Doo 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Doo 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Doo 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8412231896a2b58eef660e4acde452beff05e953c38b3f1166c727b93a9aef14 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YZV 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8412231896a2b58eef660e4acde452beff05e953c38b3f1166c727b93a9aef14 3 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8412231896a2b58eef660e4acde452beff05e953c38b3f1166c727b93a9aef14 3 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8412231896a2b58eef660e4acde452beff05e953c38b3f1166c727b93a9aef14 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YZV 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YZV 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.YZV 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
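[editorial sketch] The key files being produced in these traces follow the NVMe DH-HMAC-CHAP secret representation. A hedged sketch of the visible gen_dhchap_key steps, random hex from xxd wrapped into a DHHC-1 string; the exact python body in nvmf/common.sh is not shown in the trace, so this assumes the standard base64(secret || CRC-32) encoding used by nvme-cli:

    digest=0                                     # null=0 sha256=1 sha384=2 sha512=3
    key=$(xxd -p -c0 -l 24 /dev/urandom)         # 48 hex chars for a 48-char key
    file=$(mktemp -t spdk.key-null.XXX)
    python3 - "$key" "$digest" > "$file" <<'EOF'
    import base64, binascii, struct, sys
    secret = bytes.fromhex(sys.argv[1])
    crc = struct.pack("<I", binascii.crc32(secret) & 0xffffffff)  # CRC-32, LE
    print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
    EOF
    chmod 0600 "$file"                           # keys are secrets; match traced perms

The digest field in the DHHC-1 prefix is the hash-transform id from the digests map traced above, and each generated file is stored into keys[] or ckeys[] for the auth test matrix, as the remaining traces show.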
00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=99762a9637c27984f2dd101730360663 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2Ls 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 99762a9637c27984f2dd101730360663 1 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 99762a9637c27984f2dd101730360663 1 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=99762a9637c27984f2dd101730360663 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2Ls 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2Ls 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.2Ls 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=85c61dde96ff7b2f4719812d6895f74d4dc120d1311091f7 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BsQ 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 85c61dde96ff7b2f4719812d6895f74d4dc120d1311091f7 2 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 85c61dde96ff7b2f4719812d6895f74d4dc120d1311091f7 2 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:43.771 09:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=85c61dde96ff7b2f4719812d6895f74d4dc120d1311091f7 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BsQ 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BsQ 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.BsQ 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:43.771 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e754750de11ec4167fb88be997d996f8d3ad2038032969c4 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Pii 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e754750de11ec4167fb88be997d996f8d3ad2038032969c4 2 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e754750de11ec4167fb88be997d996f8d3ad2038032969c4 2 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e754750de11ec4167fb88be997d996f8d3ad2038032969c4 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Pii 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Pii 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Pii 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
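
The "python -" step inside each of these records is what turns the raw hex string into the DHHC-1 secret that reappears later in the nvme connect commands; for example, key 99762a9637c27984f2dd101730360663 with digest id 1 surfaces below as DHHC-1:01:OTk3NjJh...: (the base64 prefix decodes back to the ASCII hex key). A sketch of that transform, assuming the standard DH-HMAC-CHAP secret representation in which the base64 payload is the ASCII key followed by a little-endian CRC-32 of it:

format_dhchap_key() {  # usage: format_dhchap_key <hex_key> <digest_id>; sketch of the traced "python -" step
python3 - "$1" "$2" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the ASCII hex string itself is the secret payload
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed little-endian CRC-32 suffix per the secret format
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}
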
00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f064d8ada4ba117caf9cf303f441ef85 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.awq 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f064d8ada4ba117caf9cf303f441ef85 1 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f064d8ada4ba117caf9cf303f441ef85 1 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f064d8ada4ba117caf9cf303f441ef85 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.awq 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.awq 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.awq 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:43.772 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ca97af6febc2ed040f2415641ff264325ecc315490aff33d2a1737c5b694140e 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.H66 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key ca97af6febc2ed040f2415641ff264325ecc315490aff33d2a1737c5b694140e 3 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ca97af6febc2ed040f2415641ff264325ecc315490aff33d2a1737c5b694140e 3 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ca97af6febc2ed040f2415641ff264325ecc315490aff33d2a1737c5b694140e 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.H66 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.H66 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.H66 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1092911 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1092911 ']' 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:44.031 09:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.031 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.031 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:44.031 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1092933 /var/tmp/host.sock 00:15:44.031 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1092933 ']' 00:15:44.031 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:44.031 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:44.031 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:44.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
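
At this point two RPC servers are listening: the nvmf target on the default /var/tmp/spdk.sock (pid 1092911) and the host-side server on /var/tmp/host.sock (pid 1092933). The loop that follows registers every keys[i]/ckeys[i] file on both of them, so target and host can resolve the same named secret during the DH-HMAC-CHAP exchange. For key0 the pair of calls is equivalent to:

scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Doo                        # target, default socket; the trace does this via rpc_cmd
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Doo  # host side, via the hostrpc wrapper
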
00:15:44.031 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:44.031 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Doo 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Doo 00:15:44.291 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Doo 00:15:44.550 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.YZV ]] 00:15:44.550 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YZV 00:15:44.550 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.550 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.550 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.550 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YZV 00:15:44.550 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YZV 00:15:44.808 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:44.808 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2Ls 00:15:44.808 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.808 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.808 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.808 09:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2Ls 00:15:44.808 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.2Ls 00:15:45.067 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.BsQ ]] 00:15:45.067 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BsQ 00:15:45.067 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.067 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.067 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.067 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BsQ 00:15:45.067 09:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BsQ 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Pii 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Pii 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Pii 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.awq ]] 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.awq 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.awq 00:15:45.326 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.awq 00:15:45.585 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:45.585 09:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.H66 00:15:45.585 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.585 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.585 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.585 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.H66 00:15:45.585 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.H66 00:15:45.842 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:45.842 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:45.842 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.842 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.842 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.842 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:46.100 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:46.100 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.100 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.100 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:46.100 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.100 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.100 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.101 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.101 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.101 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.101 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.101 09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.101 
09:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.358 00:15:46.358 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.358 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.359 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.618 { 00:15:46.618 "cntlid": 1, 00:15:46.618 "qid": 0, 00:15:46.618 "state": "enabled", 00:15:46.618 "thread": "nvmf_tgt_poll_group_000", 00:15:46.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.618 "listen_address": { 00:15:46.618 "trtype": "TCP", 00:15:46.618 "adrfam": "IPv4", 00:15:46.618 "traddr": "10.0.0.2", 00:15:46.618 "trsvcid": "4420" 00:15:46.618 }, 00:15:46.618 "peer_address": { 00:15:46.618 "trtype": "TCP", 00:15:46.618 "adrfam": "IPv4", 00:15:46.618 "traddr": "10.0.0.1", 00:15:46.618 "trsvcid": "52768" 00:15:46.618 }, 00:15:46.618 "auth": { 00:15:46.618 "state": "completed", 00:15:46.618 "digest": "sha256", 00:15:46.618 "dhgroup": "null" 00:15:46.618 } 00:15:46.618 } 00:15:46.618 ]' 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.618 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.875 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:15:46.875 09:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:15:47.443 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.443 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.443 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.443 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.443 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.443 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.443 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.443 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.702 09:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.702 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.961 00:15:47.961 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.961 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.961 09:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.961 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.961 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.961 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.961 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.219 { 00:15:48.219 "cntlid": 3, 00:15:48.219 "qid": 0, 00:15:48.219 "state": "enabled", 00:15:48.219 "thread": "nvmf_tgt_poll_group_000", 00:15:48.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.219 "listen_address": { 00:15:48.219 "trtype": "TCP", 00:15:48.219 "adrfam": "IPv4", 00:15:48.219 "traddr": "10.0.0.2", 00:15:48.219 "trsvcid": "4420" 00:15:48.219 }, 00:15:48.219 "peer_address": { 00:15:48.219 "trtype": "TCP", 00:15:48.219 "adrfam": "IPv4", 00:15:48.219 "traddr": "10.0.0.1", 00:15:48.219 "trsvcid": "52804" 00:15:48.219 }, 00:15:48.219 "auth": { 00:15:48.219 "state": "completed", 00:15:48.219 "digest": "sha256", 00:15:48.219 "dhgroup": "null" 00:15:48.219 } 00:15:48.219 } 00:15:48.219 ]' 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.219 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.478 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:15:48.478 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:15:49.046 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.046 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.046 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.046 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.046 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.046 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.046 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.046 09:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.304 09:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.304 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.563 00:15:49.563 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.563 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.563 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.822 { 00:15:49.822 "cntlid": 5, 00:15:49.822 "qid": 0, 00:15:49.822 "state": "enabled", 00:15:49.822 "thread": "nvmf_tgt_poll_group_000", 00:15:49.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.822 "listen_address": { 00:15:49.822 "trtype": "TCP", 00:15:49.822 "adrfam": "IPv4", 00:15:49.822 "traddr": "10.0.0.2", 00:15:49.822 "trsvcid": "4420" 00:15:49.822 }, 00:15:49.822 "peer_address": { 00:15:49.822 "trtype": "TCP", 00:15:49.822 "adrfam": "IPv4", 00:15:49.822 "traddr": "10.0.0.1", 00:15:49.822 "trsvcid": "52830" 00:15:49.822 }, 00:15:49.822 "auth": { 00:15:49.822 "state": "completed", 00:15:49.822 "digest": "sha256", 00:15:49.822 "dhgroup": "null" 00:15:49.822 } 00:15:49.822 } 00:15:49.822 ]' 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.822 09:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.822 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.081 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:15:50.081 09:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:15:50.646 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.646 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.646 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.646 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.646 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.646 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.646 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:50.646 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.906 09:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.164 00:15:51.164 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.164 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.164 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.423 { 00:15:51.423 "cntlid": 7, 00:15:51.423 "qid": 0, 00:15:51.423 "state": "enabled", 00:15:51.423 "thread": "nvmf_tgt_poll_group_000", 00:15:51.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.423 "listen_address": { 00:15:51.423 "trtype": "TCP", 00:15:51.423 "adrfam": "IPv4", 00:15:51.423 "traddr": "10.0.0.2", 00:15:51.423 "trsvcid": "4420" 00:15:51.423 }, 00:15:51.423 "peer_address": { 00:15:51.423 "trtype": "TCP", 00:15:51.423 "adrfam": "IPv4", 00:15:51.423 "traddr": "10.0.0.1", 00:15:51.423 "trsvcid": "52844" 00:15:51.423 }, 00:15:51.423 "auth": { 00:15:51.423 "state": "completed", 00:15:51.423 "digest": "sha256", 00:15:51.423 "dhgroup": "null" 00:15:51.423 } 00:15:51.423 } 00:15:51.423 ]' 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.423 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.682 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:15:51.682 09:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:15:52.250 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.250 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.250 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.250 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.250 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.250 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.250 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.250 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.250 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.508 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.766 00:15:52.766 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.766 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.766 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.025 { 00:15:53.025 "cntlid": 9, 00:15:53.025 "qid": 0, 00:15:53.025 "state": "enabled", 00:15:53.025 "thread": "nvmf_tgt_poll_group_000", 00:15:53.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.025 "listen_address": { 00:15:53.025 "trtype": "TCP", 00:15:53.025 "adrfam": "IPv4", 00:15:53.025 "traddr": "10.0.0.2", 00:15:53.025 "trsvcid": "4420" 00:15:53.025 }, 00:15:53.025 "peer_address": { 00:15:53.025 "trtype": "TCP", 00:15:53.025 "adrfam": "IPv4", 00:15:53.025 "traddr": "10.0.0.1", 00:15:53.025 "trsvcid": "52876" 00:15:53.025 }, 00:15:53.025 "auth": { 00:15:53.025 "state": "completed", 00:15:53.025 "digest": "sha256", 00:15:53.025 "dhgroup": "ffdhe2048" 00:15:53.025 } 00:15:53.025 } 00:15:53.025 ]' 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.025 09:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.284 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:15:53.284 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:15:53.853 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.853 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.853 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.853 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.853 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.853 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.853 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.853 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.112 09:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.112 09:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.371 00:15:54.371 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.371 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.371 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.629 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.629 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.629 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.629 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.629 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.629 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.629 { 00:15:54.629 "cntlid": 11, 00:15:54.629 "qid": 0, 00:15:54.629 "state": "enabled", 00:15:54.629 "thread": "nvmf_tgt_poll_group_000", 00:15:54.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.629 "listen_address": { 00:15:54.629 "trtype": "TCP", 00:15:54.629 "adrfam": "IPv4", 00:15:54.629 "traddr": "10.0.0.2", 00:15:54.629 "trsvcid": "4420" 00:15:54.629 }, 00:15:54.629 "peer_address": { 00:15:54.629 "trtype": "TCP", 00:15:54.629 "adrfam": "IPv4", 00:15:54.629 "traddr": "10.0.0.1", 00:15:54.630 "trsvcid": "52406" 00:15:54.630 }, 00:15:54.630 "auth": { 00:15:54.630 "state": "completed", 00:15:54.630 "digest": "sha256", 00:15:54.630 "dhgroup": "ffdhe2048" 00:15:54.630 } 00:15:54.630 } 00:15:54.630 ]' 00:15:54.630 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.630 09:17:55 
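The three jq probes repeated after every attach are the entire pass/fail check: the qpair that nvmf_subsystem_get_qpairs reports for the subsystem must carry the negotiated digest, the negotiated DH group, and an auth state of "completed". Standalone, against the target's default RPC socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
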
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.630 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.630 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.630 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.630 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.630 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.630 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.889 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:15:54.889 09:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:15:55.457 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.457 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.457 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.457 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.457 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.457 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.457 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.457 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.716 09:17:56 
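Condensed, one keyid iteration of this loop is three RPCs: pin the host-side bdev module to a single digest/DH-group pair, register the host's key material with the subsystem, then attach through the host socket. key1/ckey1 are names of keys registered earlier in the script (that registration is not shown in this stretch of the log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  # host side: restrict DH-HMAC-CHAP negotiation to one digest and one group
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # target side: associate the host NQN with its key (and the controller key)
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach, forcing the authenticated path
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
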
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.716 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.717 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.976 00:15:55.976 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.976 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.976 09:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.235 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.235 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.235 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.235 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.235 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.235 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.235 { 00:15:56.235 "cntlid": 13, 00:15:56.235 "qid": 0, 00:15:56.235 "state": "enabled", 00:15:56.235 "thread": "nvmf_tgt_poll_group_000", 00:15:56.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:56.235 "listen_address": { 00:15:56.235 "trtype": "TCP", 00:15:56.235 "adrfam": "IPv4", 00:15:56.235 "traddr": "10.0.0.2", 00:15:56.235 "trsvcid": "4420" 00:15:56.235 }, 00:15:56.235 "peer_address": { 00:15:56.235 "trtype": "TCP", 00:15:56.235 "adrfam": "IPv4", 00:15:56.235 "traddr": "10.0.0.1", 00:15:56.235 "trsvcid": "52428" 00:15:56.235 }, 00:15:56.235 "auth": { 00:15:56.235 "state": "completed", 00:15:56.235 "digest": 
"sha256", 00:15:56.235 "dhgroup": "ffdhe2048" 00:15:56.235 } 00:15:56.235 } 00:15:56.235 ]' 00:15:56.235 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.235 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.236 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.236 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.236 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.236 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.236 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.236 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.495 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:15:56.495 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:15:57.062 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.062 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:57.062 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.062 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.062 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.062 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.062 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.062 09:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.321 09:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.321 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:57.322 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.322 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.581 00:15:57.581 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.581 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.581 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.840 { 00:15:57.840 "cntlid": 15, 00:15:57.840 "qid": 0, 00:15:57.840 "state": "enabled", 00:15:57.840 "thread": "nvmf_tgt_poll_group_000", 00:15:57.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.840 "listen_address": { 00:15:57.840 "trtype": "TCP", 00:15:57.840 "adrfam": "IPv4", 00:15:57.840 "traddr": "10.0.0.2", 00:15:57.840 "trsvcid": "4420" 00:15:57.840 }, 00:15:57.840 "peer_address": { 00:15:57.840 "trtype": "TCP", 00:15:57.840 "adrfam": "IPv4", 00:15:57.840 "traddr": "10.0.0.1", 00:15:57.840 
"trsvcid": "52454" 00:15:57.840 }, 00:15:57.840 "auth": { 00:15:57.840 "state": "completed", 00:15:57.840 "digest": "sha256", 00:15:57.840 "dhgroup": "ffdhe2048" 00:15:57.840 } 00:15:57.840 } 00:15:57.840 ]' 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.840 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.099 09:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:15:58.099 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:15:58.668 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.668 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.668 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.668 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.668 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.668 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.668 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.668 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.668 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:58.927 09:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.927 09:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.186 00:15:59.186 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.186 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.186 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.445 { 00:15:59.445 "cntlid": 17, 00:15:59.445 "qid": 0, 00:15:59.445 "state": "enabled", 00:15:59.445 "thread": "nvmf_tgt_poll_group_000", 00:15:59.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:59.445 "listen_address": { 00:15:59.445 "trtype": "TCP", 00:15:59.445 "adrfam": "IPv4", 
00:15:59.445 "traddr": "10.0.0.2", 00:15:59.445 "trsvcid": "4420" 00:15:59.445 }, 00:15:59.445 "peer_address": { 00:15:59.445 "trtype": "TCP", 00:15:59.445 "adrfam": "IPv4", 00:15:59.445 "traddr": "10.0.0.1", 00:15:59.445 "trsvcid": "52476" 00:15:59.445 }, 00:15:59.445 "auth": { 00:15:59.445 "state": "completed", 00:15:59.445 "digest": "sha256", 00:15:59.445 "dhgroup": "ffdhe3072" 00:15:59.445 } 00:15:59.445 } 00:15:59.445 ]' 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.445 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.704 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:15:59.704 09:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:00.272 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.272 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.272 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.272 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.272 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.272 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.272 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.272 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.531 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.790 00:16:00.790 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.790 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.790 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.049 { 
00:16:01.049 "cntlid": 19, 00:16:01.049 "qid": 0, 00:16:01.049 "state": "enabled", 00:16:01.049 "thread": "nvmf_tgt_poll_group_000", 00:16:01.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:01.049 "listen_address": { 00:16:01.049 "trtype": "TCP", 00:16:01.049 "adrfam": "IPv4", 00:16:01.049 "traddr": "10.0.0.2", 00:16:01.049 "trsvcid": "4420" 00:16:01.049 }, 00:16:01.049 "peer_address": { 00:16:01.049 "trtype": "TCP", 00:16:01.049 "adrfam": "IPv4", 00:16:01.049 "traddr": "10.0.0.1", 00:16:01.049 "trsvcid": "52494" 00:16:01.049 }, 00:16:01.049 "auth": { 00:16:01.049 "state": "completed", 00:16:01.049 "digest": "sha256", 00:16:01.049 "dhgroup": "ffdhe3072" 00:16:01.049 } 00:16:01.049 } 00:16:01.049 ]' 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.049 09:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.049 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.049 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.050 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.308 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:01.308 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:01.874 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.875 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.875 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.875 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.875 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.875 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.875 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:01.875 09:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.133 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.392 00:16:02.392 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.392 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.392 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.651 09:18:03 
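Every cycle also unwinds symmetrically, so the next keyid starts from a clean subsystem: the host-socket controller is detached, the kernel initiator session from the nvme-connect check is dropped, and the host entry is revoked on the target. In isolation (rpc and hostnqn as in the earlier sketch):

  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0           # host-side bdev path
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0                            # kernel initiator path
  "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"  # target side
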
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.651 { 00:16:02.651 "cntlid": 21, 00:16:02.651 "qid": 0, 00:16:02.651 "state": "enabled", 00:16:02.651 "thread": "nvmf_tgt_poll_group_000", 00:16:02.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.651 "listen_address": { 00:16:02.651 "trtype": "TCP", 00:16:02.651 "adrfam": "IPv4", 00:16:02.651 "traddr": "10.0.0.2", 00:16:02.651 "trsvcid": "4420" 00:16:02.651 }, 00:16:02.651 "peer_address": { 00:16:02.651 "trtype": "TCP", 00:16:02.651 "adrfam": "IPv4", 00:16:02.651 "traddr": "10.0.0.1", 00:16:02.651 "trsvcid": "52518" 00:16:02.651 }, 00:16:02.651 "auth": { 00:16:02.651 "state": "completed", 00:16:02.651 "digest": "sha256", 00:16:02.651 "dhgroup": "ffdhe3072" 00:16:02.651 } 00:16:02.651 } 00:16:02.651 ]' 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.651 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.909 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:02.910 09:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:03.477 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.477 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.477 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.477 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.477 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:03.477 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.477 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:03.477 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.737 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:03.996 00:16:03.996 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.996 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.996 09:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.255 09:18:05 
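Keyid 3 is the unidirectional case: no ckey3 exists, so the ${ckeys[$3]:+...} expansion above yields an empty array and nvmf_subsystem_add_host and the attach receive only --dhchap-key key3; the matching nvme connect likewise passes --dhchap-secret without --dhchap-ctrl-secret, i.e. the host authenticates itself but does not demand proof from the controller. The idiom, stripped down (the function's positional $3 renamed to $keyid for readability):

  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # empty when no controller key is set
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
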
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.255 { 00:16:04.255 "cntlid": 23, 00:16:04.255 "qid": 0, 00:16:04.255 "state": "enabled", 00:16:04.255 "thread": "nvmf_tgt_poll_group_000", 00:16:04.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.255 "listen_address": { 00:16:04.255 "trtype": "TCP", 00:16:04.255 "adrfam": "IPv4", 00:16:04.255 "traddr": "10.0.0.2", 00:16:04.255 "trsvcid": "4420" 00:16:04.255 }, 00:16:04.255 "peer_address": { 00:16:04.255 "trtype": "TCP", 00:16:04.255 "adrfam": "IPv4", 00:16:04.255 "traddr": "10.0.0.1", 00:16:04.255 "trsvcid": "43404" 00:16:04.255 }, 00:16:04.255 "auth": { 00:16:04.255 "state": "completed", 00:16:04.255 "digest": "sha256", 00:16:04.255 "dhgroup": "ffdhe3072" 00:16:04.255 } 00:16:04.255 } 00:16:04.255 ]' 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.255 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.515 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:04.515 09:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:05.082 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.082 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.082 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.082 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.082 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:05.083 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.083 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.083 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:05.083 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.342 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.600 00:16:05.600 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.600 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.600 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.858 { 00:16:05.858 "cntlid": 25, 00:16:05.858 "qid": 0, 00:16:05.858 "state": "enabled", 00:16:05.858 "thread": "nvmf_tgt_poll_group_000", 00:16:05.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.858 "listen_address": { 00:16:05.858 "trtype": "TCP", 00:16:05.858 "adrfam": "IPv4", 00:16:05.858 "traddr": "10.0.0.2", 00:16:05.858 "trsvcid": "4420" 00:16:05.858 }, 00:16:05.858 "peer_address": { 00:16:05.858 "trtype": "TCP", 00:16:05.858 "adrfam": "IPv4", 00:16:05.858 "traddr": "10.0.0.1", 00:16:05.858 "trsvcid": "43432" 00:16:05.858 }, 00:16:05.858 "auth": { 00:16:05.858 "state": "completed", 00:16:05.858 "digest": "sha256", 00:16:05.858 "dhgroup": "ffdhe4096" 00:16:05.858 } 00:16:05.858 } 00:16:05.858 ]' 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.858 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.859 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.117 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.117 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.117 09:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.117 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:06.117 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:06.710 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.710 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.710 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.710 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.710 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.710 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.710 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.710 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.969 09:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.227 00:16:07.227 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.227 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.227 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.485 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.485 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.485 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.485 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.485 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.485 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.485 { 00:16:07.485 "cntlid": 27, 00:16:07.485 "qid": 0, 00:16:07.486 "state": "enabled", 00:16:07.486 "thread": "nvmf_tgt_poll_group_000", 00:16:07.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.486 "listen_address": { 00:16:07.486 "trtype": "TCP", 00:16:07.486 "adrfam": "IPv4", 00:16:07.486 "traddr": "10.0.0.2", 00:16:07.486 "trsvcid": "4420" 00:16:07.486 }, 00:16:07.486 "peer_address": { 00:16:07.486 "trtype": "TCP", 00:16:07.486 "adrfam": "IPv4", 00:16:07.486 "traddr": "10.0.0.1", 00:16:07.486 "trsvcid": "43464" 00:16:07.486 }, 00:16:07.486 "auth": { 00:16:07.486 "state": "completed", 00:16:07.486 "digest": "sha256", 00:16:07.486 "dhgroup": "ffdhe4096" 00:16:07.486 } 00:16:07.486 } 00:16:07.486 ]' 00:16:07.486 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.486 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.486 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.486 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.486 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.486 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.486 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.486 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.744 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:07.744 09:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:08.311 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:08.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.311 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.311 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.311 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.311 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.311 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.311 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.312 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.573 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.832 00:16:08.832 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
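The trace above repeats one pattern per key: constrain the host's DH-HMAC-CHAP negotiation, register the host NQN on the subsystem with a key pair, attach a controller (which runs the handshake during connect), and verify. Condensed, one iteration looks like the sketch below; the paths, NQNs, and flags are taken verbatim from this run, while key1/ckey1 are assumed to name DH-HMAC-CHAP keys loaded earlier in the job (not shown here), and the target is assumed to answer on SPDK's default RPC socket.

    # Sketch of one iteration of the loop traced above (not recorded output).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUB=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    # Host side (RPC socket /var/tmp/host.sock): pin the digest/DH-group pair
    # so the handshake can only negotiate what this iteration is testing.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Target side: allow the host, binding key1 (host auth) and ckey1
    # (controller auth) for bidirectional DH-HMAC-CHAP.
    $RPC nvmf_subsystem_add_host "$SUB" "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller; authentication runs during connect.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUB" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Confirm the controller came up before inspecting the qpair.
    $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'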
00:16:08.832 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.832 09:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.091 { 00:16:09.091 "cntlid": 29, 00:16:09.091 "qid": 0, 00:16:09.091 "state": "enabled", 00:16:09.091 "thread": "nvmf_tgt_poll_group_000", 00:16:09.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.091 "listen_address": { 00:16:09.091 "trtype": "TCP", 00:16:09.091 "adrfam": "IPv4", 00:16:09.091 "traddr": "10.0.0.2", 00:16:09.091 "trsvcid": "4420" 00:16:09.091 }, 00:16:09.091 "peer_address": { 00:16:09.091 "trtype": "TCP", 00:16:09.091 "adrfam": "IPv4", 00:16:09.091 "traddr": "10.0.0.1", 00:16:09.091 "trsvcid": "43498" 00:16:09.091 }, 00:16:09.091 "auth": { 00:16:09.091 "state": "completed", 00:16:09.091 "digest": "sha256", 00:16:09.091 "dhgroup": "ffdhe4096" 00:16:09.091 } 00:16:09.091 } 00:16:09.091 ]' 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.091 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.349 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.349 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.349 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.350 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:09.350 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: 
--dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:09.917 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.917 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.917 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.917 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.176 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.176 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.176 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.176 09:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.176 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:10.435 00:16:10.435 09:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.435 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.435 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.694 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.694 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.694 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.694 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.694 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.694 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.694 { 00:16:10.694 "cntlid": 31, 00:16:10.694 "qid": 0, 00:16:10.694 "state": "enabled", 00:16:10.694 "thread": "nvmf_tgt_poll_group_000", 00:16:10.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.694 "listen_address": { 00:16:10.694 "trtype": "TCP", 00:16:10.694 "adrfam": "IPv4", 00:16:10.694 "traddr": "10.0.0.2", 00:16:10.694 "trsvcid": "4420" 00:16:10.694 }, 00:16:10.694 "peer_address": { 00:16:10.694 "trtype": "TCP", 00:16:10.694 "adrfam": "IPv4", 00:16:10.694 "traddr": "10.0.0.1", 00:16:10.694 "trsvcid": "43504" 00:16:10.694 }, 00:16:10.694 "auth": { 00:16:10.694 "state": "completed", 00:16:10.694 "digest": "sha256", 00:16:10.694 "dhgroup": "ffdhe4096" 00:16:10.694 } 00:16:10.694 } 00:16:10.694 ]' 00:16:10.695 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.695 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.695 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.954 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.954 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.954 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.954 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.954 09:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.954 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:10.954 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:11.522 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.522 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.522 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.522 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.522 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.522 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.522 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.522 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.522 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.781 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.782 09:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.350 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.350 { 00:16:12.350 "cntlid": 33, 00:16:12.350 "qid": 0, 00:16:12.350 "state": "enabled", 00:16:12.350 "thread": "nvmf_tgt_poll_group_000", 00:16:12.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.350 "listen_address": { 00:16:12.350 "trtype": "TCP", 00:16:12.350 "adrfam": "IPv4", 00:16:12.350 "traddr": "10.0.0.2", 00:16:12.350 "trsvcid": "4420" 00:16:12.350 }, 00:16:12.350 "peer_address": { 00:16:12.350 "trtype": "TCP", 00:16:12.350 "adrfam": "IPv4", 00:16:12.350 "traddr": "10.0.0.1", 00:16:12.350 "trsvcid": "43528" 00:16:12.350 }, 00:16:12.350 "auth": { 00:16:12.350 "state": "completed", 00:16:12.350 "digest": "sha256", 00:16:12.350 "dhgroup": "ffdhe6144" 00:16:12.350 } 00:16:12.350 } 00:16:12.350 ]' 00:16:12.350 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.609 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.609 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.609 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.609 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.609 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.609 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.609 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.868 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret 
DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:12.868 09:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:13.435 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.435 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.435 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.435 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.435 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.435 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.435 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.435 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.694 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.954 00:16:13.954 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.954 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.954 09:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.213 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.213 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.213 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.213 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.213 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.213 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.213 { 00:16:14.213 "cntlid": 35, 00:16:14.213 "qid": 0, 00:16:14.213 "state": "enabled", 00:16:14.213 "thread": "nvmf_tgt_poll_group_000", 00:16:14.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.213 "listen_address": { 00:16:14.214 "trtype": "TCP", 00:16:14.214 "adrfam": "IPv4", 00:16:14.214 "traddr": "10.0.0.2", 00:16:14.214 "trsvcid": "4420" 00:16:14.214 }, 00:16:14.214 "peer_address": { 00:16:14.214 "trtype": "TCP", 00:16:14.214 "adrfam": "IPv4", 00:16:14.214 "traddr": "10.0.0.1", 00:16:14.214 "trsvcid": "44540" 00:16:14.214 }, 00:16:14.214 "auth": { 00:16:14.214 "state": "completed", 00:16:14.214 "digest": "sha256", 00:16:14.214 "dhgroup": "ffdhe6144" 00:16:14.214 } 00:16:14.214 } 00:16:14.214 ]' 00:16:14.214 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.214 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.214 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.214 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.214 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.214 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.214 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.214 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.473 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:14.473 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:15.041 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.041 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.041 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.041 09:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.041 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.041 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.041 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.041 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.300 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.301 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.301 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.301 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.560 00:16:15.560 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.560 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.560 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.819 { 00:16:15.819 "cntlid": 37, 00:16:15.819 "qid": 0, 00:16:15.819 "state": "enabled", 00:16:15.819 "thread": "nvmf_tgt_poll_group_000", 00:16:15.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.819 "listen_address": { 00:16:15.819 "trtype": "TCP", 00:16:15.819 "adrfam": "IPv4", 00:16:15.819 "traddr": "10.0.0.2", 00:16:15.819 "trsvcid": "4420" 00:16:15.819 }, 00:16:15.819 "peer_address": { 00:16:15.819 "trtype": "TCP", 00:16:15.819 "adrfam": "IPv4", 00:16:15.819 "traddr": "10.0.0.1", 00:16:15.819 "trsvcid": "44568" 00:16:15.819 }, 00:16:15.819 "auth": { 00:16:15.819 "state": "completed", 00:16:15.819 "digest": "sha256", 00:16:15.819 "dhgroup": "ffdhe6144" 00:16:15.819 } 00:16:15.819 } 00:16:15.819 ]' 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.819 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.078 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.078 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:16.078 09:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.078 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:16.078 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:16.643 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.643 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.643 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.643 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.643 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.643 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.643 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.643 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.902 09:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.902 09:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:17.466 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.466 { 00:16:17.466 "cntlid": 39, 00:16:17.466 "qid": 0, 00:16:17.466 "state": "enabled", 00:16:17.466 "thread": "nvmf_tgt_poll_group_000", 00:16:17.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.466 "listen_address": { 00:16:17.466 "trtype": "TCP", 00:16:17.466 "adrfam": "IPv4", 00:16:17.466 "traddr": "10.0.0.2", 00:16:17.466 "trsvcid": "4420" 00:16:17.466 }, 00:16:17.466 "peer_address": { 00:16:17.466 "trtype": "TCP", 00:16:17.466 "adrfam": "IPv4", 00:16:17.466 "traddr": "10.0.0.1", 00:16:17.466 "trsvcid": "44576" 00:16:17.466 }, 00:16:17.466 "auth": { 00:16:17.466 "state": "completed", 00:16:17.466 "digest": "sha256", 00:16:17.466 "dhgroup": "ffdhe6144" 00:16:17.466 } 00:16:17.466 } 00:16:17.466 ]' 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.466 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.723 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.723 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.723 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:17.723 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.723 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.980 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:17.980 09:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
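Each successful attach is then verified from the target's point of view: nvmf_subsystem_get_qpairs returns the qpair JSON shown throughout this log, and the script compares the negotiated auth fields against the expected digest and DH group. A minimal sketch of that check, reusing $RPC and $SUB from the earlier sketch and assuming ffdhe8192 as in this leg of the loop:

    # Sketch: verify the negotiated DH-HMAC-CHAP parameters on the target.
    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUB")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Tear down the host-side controller before the kernel-initiator leg.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0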
00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.545 09:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.112 00:16:19.112 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.112 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.112 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.370 { 00:16:19.370 "cntlid": 41, 00:16:19.370 "qid": 0, 00:16:19.370 "state": "enabled", 00:16:19.370 "thread": "nvmf_tgt_poll_group_000", 00:16:19.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.370 "listen_address": { 00:16:19.370 "trtype": "TCP", 00:16:19.370 "adrfam": "IPv4", 00:16:19.370 "traddr": "10.0.0.2", 00:16:19.370 "trsvcid": "4420" 00:16:19.370 }, 00:16:19.370 "peer_address": { 00:16:19.370 "trtype": "TCP", 00:16:19.370 "adrfam": "IPv4", 00:16:19.370 "traddr": "10.0.0.1", 00:16:19.370 "trsvcid": "44618" 00:16:19.370 }, 00:16:19.370 "auth": { 00:16:19.370 "state": "completed", 00:16:19.370 "digest": "sha256", 00:16:19.370 "dhgroup": "ffdhe8192" 00:16:19.370 } 00:16:19.370 } 00:16:19.370 ]' 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.370 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.370 09:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.629 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.629 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.629 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.629 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:19.629 09:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:20.195 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.195 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.195 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.195 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.195 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.195 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.195 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:20.195 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.454 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.021 00:16:21.021 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.022 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.022 09:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.280 { 00:16:21.280 "cntlid": 43, 00:16:21.280 "qid": 0, 00:16:21.280 "state": "enabled", 00:16:21.280 "thread": "nvmf_tgt_poll_group_000", 00:16:21.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.280 "listen_address": { 00:16:21.280 "trtype": "TCP", 00:16:21.280 "adrfam": "IPv4", 00:16:21.280 "traddr": "10.0.0.2", 00:16:21.280 "trsvcid": "4420" 00:16:21.280 }, 00:16:21.280 "peer_address": { 00:16:21.280 "trtype": "TCP", 00:16:21.280 "adrfam": "IPv4", 00:16:21.280 "traddr": "10.0.0.1", 00:16:21.280 "trsvcid": "44642" 00:16:21.280 }, 00:16:21.280 "auth": { 00:16:21.280 "state": "completed", 00:16:21.280 "digest": "sha256", 00:16:21.280 "dhgroup": "ffdhe8192" 00:16:21.280 } 00:16:21.280 } 00:16:21.280 ]' 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.280 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.539 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:21.539 09:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:22.107 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.107 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.107 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.107 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.107 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.107 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.107 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.107 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.366 09:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.366 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.933 00:16:22.933 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.933 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.933 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.933 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.933 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.933 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.933 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.933 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.933 09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.933 { 00:16:22.933 "cntlid": 45, 00:16:22.933 "qid": 0, 00:16:22.933 "state": "enabled", 00:16:22.933 "thread": "nvmf_tgt_poll_group_000", 00:16:22.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.933 "listen_address": { 00:16:22.933 "trtype": "TCP", 00:16:22.933 "adrfam": "IPv4", 00:16:22.933 "traddr": "10.0.0.2", 00:16:22.933 "trsvcid": "4420" 00:16:22.933 }, 00:16:22.933 "peer_address": { 00:16:22.933 "trtype": "TCP", 00:16:22.933 "adrfam": "IPv4", 00:16:22.933 "traddr": "10.0.0.1", 00:16:22.933 "trsvcid": "44668" 00:16:22.933 }, 00:16:22.933 "auth": { 00:16:22.933 "state": "completed", 00:16:22.933 "digest": "sha256", 00:16:22.933 "dhgroup": "ffdhe8192" 00:16:22.933 } 00:16:22.933 } 00:16:22.933 ]' 00:16:22.933 
09:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.192 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.192 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.192 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.192 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.192 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.192 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.192 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.450 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:23.450 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:24.017 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.017 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.017 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.017 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.017 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.017 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.017 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.017 09:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.017 09:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.017 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.584 00:16:24.584 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.584 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.584 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.843 { 00:16:24.843 "cntlid": 47, 00:16:24.843 "qid": 0, 00:16:24.843 "state": "enabled", 00:16:24.843 "thread": "nvmf_tgt_poll_group_000", 00:16:24.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.843 "listen_address": { 00:16:24.843 "trtype": "TCP", 00:16:24.843 "adrfam": "IPv4", 00:16:24.843 "traddr": "10.0.0.2", 00:16:24.843 "trsvcid": "4420" 00:16:24.843 }, 00:16:24.843 "peer_address": { 00:16:24.843 "trtype": "TCP", 00:16:24.843 "adrfam": "IPv4", 00:16:24.843 "traddr": "10.0.0.1", 00:16:24.843 "trsvcid": "55486" 00:16:24.843 }, 00:16:24.843 "auth": { 00:16:24.843 "state": "completed", 00:16:24.843 
"digest": "sha256", 00:16:24.843 "dhgroup": "ffdhe8192" 00:16:24.843 } 00:16:24.843 } 00:16:24.843 ]' 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.843 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.102 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.102 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.102 09:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.102 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:25.102 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:25.669 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:25.928 09:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.928 09:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.187 00:16:26.187 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.187 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.187 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.446 { 00:16:26.446 "cntlid": 49, 00:16:26.446 "qid": 0, 00:16:26.446 "state": "enabled", 00:16:26.446 "thread": "nvmf_tgt_poll_group_000", 00:16:26.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.446 "listen_address": { 00:16:26.446 "trtype": "TCP", 00:16:26.446 "adrfam": "IPv4", 
00:16:26.446 "traddr": "10.0.0.2", 00:16:26.446 "trsvcid": "4420" 00:16:26.446 }, 00:16:26.446 "peer_address": { 00:16:26.446 "trtype": "TCP", 00:16:26.446 "adrfam": "IPv4", 00:16:26.446 "traddr": "10.0.0.1", 00:16:26.446 "trsvcid": "55518" 00:16:26.446 }, 00:16:26.446 "auth": { 00:16:26.446 "state": "completed", 00:16:26.446 "digest": "sha384", 00:16:26.446 "dhgroup": "null" 00:16:26.446 } 00:16:26.446 } 00:16:26.446 ]' 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.446 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.705 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:26.705 09:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:27.272 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.272 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.272 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.272 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.272 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.272 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.272 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.272 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.531 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.789 00:16:27.789 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.789 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.789 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.047 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.047 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.047 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.047 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.047 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.047 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.047 { 00:16:28.047 "cntlid": 51, 00:16:28.047 "qid": 0, 00:16:28.047 "state": "enabled", 
00:16:28.047 "thread": "nvmf_tgt_poll_group_000", 00:16:28.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.047 "listen_address": { 00:16:28.047 "trtype": "TCP", 00:16:28.047 "adrfam": "IPv4", 00:16:28.047 "traddr": "10.0.0.2", 00:16:28.047 "trsvcid": "4420" 00:16:28.047 }, 00:16:28.047 "peer_address": { 00:16:28.047 "trtype": "TCP", 00:16:28.047 "adrfam": "IPv4", 00:16:28.047 "traddr": "10.0.0.1", 00:16:28.047 "trsvcid": "55544" 00:16:28.047 }, 00:16:28.047 "auth": { 00:16:28.047 "state": "completed", 00:16:28.047 "digest": "sha384", 00:16:28.047 "dhgroup": "null" 00:16:28.047 } 00:16:28.047 } 00:16:28.047 ]' 00:16:28.047 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.047 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.047 09:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.047 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.047 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.047 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.047 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.047 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.305 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:28.305 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:28.871 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.871 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.871 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.871 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.871 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.871 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.871 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:28.871 09:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.129 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.130 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.130 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.130 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.389 00:16:29.389 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.389 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.389 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.648 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.648 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.648 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.648 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.648 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.648 09:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.648 { 00:16:29.648 "cntlid": 53, 00:16:29.648 "qid": 0, 00:16:29.649 "state": "enabled", 00:16:29.649 "thread": "nvmf_tgt_poll_group_000", 00:16:29.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.649 "listen_address": { 00:16:29.649 "trtype": "TCP", 00:16:29.649 "adrfam": "IPv4", 00:16:29.649 "traddr": "10.0.0.2", 00:16:29.649 "trsvcid": "4420" 00:16:29.649 }, 00:16:29.649 "peer_address": { 00:16:29.649 "trtype": "TCP", 00:16:29.649 "adrfam": "IPv4", 00:16:29.649 "traddr": "10.0.0.1", 00:16:29.649 "trsvcid": "55572" 00:16:29.649 }, 00:16:29.649 "auth": { 00:16:29.649 "state": "completed", 00:16:29.649 "digest": "sha384", 00:16:29.649 "dhgroup": "null" 00:16:29.649 } 00:16:29.649 } 00:16:29.649 ]' 00:16:29.649 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.649 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.649 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.649 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:29.649 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.649 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.649 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.649 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.908 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:29.908 09:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:30.475 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.475 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.475 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.475 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.475 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.475 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:30.475 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:30.475 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.733 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.991 00:16:30.991 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.991 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.991 09:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.249 { 00:16:31.249 "cntlid": 55, 00:16:31.249 "qid": 0, 00:16:31.249 "state": "enabled", 00:16:31.249 "thread": "nvmf_tgt_poll_group_000", 00:16:31.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.249 "listen_address": { 00:16:31.249 "trtype": "TCP", 00:16:31.249 "adrfam": "IPv4", 00:16:31.249 "traddr": "10.0.0.2", 00:16:31.249 "trsvcid": "4420" 00:16:31.249 }, 00:16:31.249 "peer_address": { 00:16:31.249 "trtype": "TCP", 00:16:31.249 "adrfam": "IPv4", 00:16:31.249 "traddr": "10.0.0.1", 00:16:31.249 "trsvcid": "55580" 00:16:31.249 }, 00:16:31.249 "auth": { 00:16:31.249 "state": "completed", 00:16:31.249 "digest": "sha384", 00:16:31.249 "dhgroup": "null" 00:16:31.249 } 00:16:31.249 } 00:16:31.249 ]' 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.249 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.510 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:31.510 09:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:32.161 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.161 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.161 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.161 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.161 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.161 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.161 09:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.161 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.161 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.476 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.476 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.734 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.734 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.734 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:32.734 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.734 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.734 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.734 { 00:16:32.734 "cntlid": 57, 00:16:32.734 "qid": 0, 00:16:32.734 "state": "enabled", 00:16:32.734 "thread": "nvmf_tgt_poll_group_000", 00:16:32.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.734 "listen_address": { 00:16:32.734 "trtype": "TCP", 00:16:32.734 "adrfam": "IPv4", 00:16:32.734 "traddr": "10.0.0.2", 00:16:32.734 "trsvcid": "4420" 00:16:32.734 }, 00:16:32.734 "peer_address": { 00:16:32.734 "trtype": "TCP", 00:16:32.734 "adrfam": "IPv4", 00:16:32.734 "traddr": "10.0.0.1", 00:16:32.734 "trsvcid": "55602" 00:16:32.734 }, 00:16:32.734 "auth": { 00:16:32.734 "state": "completed", 00:16:32.734 "digest": "sha384", 00:16:32.734 "dhgroup": "ffdhe2048" 00:16:32.734 } 00:16:32.734 } 00:16:32.734 ]' 00:16:32.734 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.734 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.734 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.993 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.993 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.993 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.993 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.993 09:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.251 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:33.251 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:33.818 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.818 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.818 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.818 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.818 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.818 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.819 09:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.076 00:16:34.076 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.076 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.076 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.334 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.334 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.334 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.334 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.334 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.334 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.334 { 00:16:34.334 "cntlid": 59, 00:16:34.334 "qid": 0, 00:16:34.334 "state": "enabled", 00:16:34.334 "thread": "nvmf_tgt_poll_group_000", 00:16:34.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.334 "listen_address": { 00:16:34.334 "trtype": "TCP", 00:16:34.334 "adrfam": "IPv4", 00:16:34.334 "traddr": "10.0.0.2", 00:16:34.334 "trsvcid": "4420" 00:16:34.334 }, 00:16:34.334 "peer_address": { 00:16:34.334 "trtype": "TCP", 00:16:34.334 "adrfam": "IPv4", 00:16:34.334 "traddr": "10.0.0.1", 00:16:34.334 "trsvcid": "47994" 00:16:34.334 }, 00:16:34.334 "auth": { 00:16:34.334 "state": "completed", 00:16:34.334 "digest": "sha384", 00:16:34.334 "dhgroup": "ffdhe2048" 00:16:34.334 } 00:16:34.334 } 00:16:34.334 ]' 00:16:34.334 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.334 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.334 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.589 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.589 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.589 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.589 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.589 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.847 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:34.847 09:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.414 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.671 00:16:35.671 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.671 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.671 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.930 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.930 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.930 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.930 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.930 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.930 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.930 { 00:16:35.930 "cntlid": 61, 00:16:35.930 "qid": 0, 00:16:35.930 "state": "enabled", 00:16:35.930 "thread": "nvmf_tgt_poll_group_000", 00:16:35.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.930 "listen_address": { 00:16:35.930 "trtype": "TCP", 00:16:35.930 "adrfam": "IPv4", 00:16:35.930 "traddr": "10.0.0.2", 00:16:35.930 "trsvcid": "4420" 00:16:35.930 }, 00:16:35.930 "peer_address": { 00:16:35.930 "trtype": "TCP", 00:16:35.930 "adrfam": "IPv4", 00:16:35.930 "traddr": "10.0.0.1", 00:16:35.930 "trsvcid": "48030" 00:16:35.930 }, 00:16:35.930 "auth": { 00:16:35.930 "state": "completed", 00:16:35.930 "digest": "sha384", 00:16:35.930 "dhgroup": "ffdhe2048" 00:16:35.930 } 00:16:35.930 } 00:16:35.930 ]' 00:16:35.930 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.930 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.930 09:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.189 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.189 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.189 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.189 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.189 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.450 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:36.450 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:37.021 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.021 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.021 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.021 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.021 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.021 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.021 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.021 09:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.021 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:37.021 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.021 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.021 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.021 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.021 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.021 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:37.021 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.021 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.279 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.279 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.279 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.279 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.279 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.537 { 00:16:37.537 "cntlid": 63, 00:16:37.537 "qid": 0, 00:16:37.537 "state": "enabled", 00:16:37.537 "thread": "nvmf_tgt_poll_group_000", 00:16:37.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.537 "listen_address": { 00:16:37.537 "trtype": "TCP", 00:16:37.537 "adrfam": "IPv4", 00:16:37.537 "traddr": "10.0.0.2", 00:16:37.537 "trsvcid": "4420" 00:16:37.537 }, 00:16:37.537 "peer_address": { 00:16:37.537 "trtype": "TCP", 00:16:37.537 "adrfam": "IPv4", 00:16:37.537 "traddr": "10.0.0.1", 00:16:37.537 "trsvcid": "48060" 00:16:37.537 }, 00:16:37.537 "auth": { 00:16:37.537 "state": "completed", 00:16:37.537 "digest": "sha384", 00:16:37.537 "dhgroup": "ffdhe2048" 00:16:37.537 } 00:16:37.537 } 00:16:37.537 ]' 00:16:37.537 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.793 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.793 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.793 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.793 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.793 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.794 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.794 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.051 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:38.051 09:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:38.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.624 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.881 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.881 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.881 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.881 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.881 
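Each iteration of the trace above follows the same three-step shape: bdev_nvme_set_options pins the host to the single digest/dhgroup pair under test, nvmf_subsystem_add_host (issued through rpc_cmd against the target) registers the host NQN on the subsystem with a DH-CHAP key pair, and bdev_nvme_attach_controller (issued through hostrpc against /var/tmp/host.sock) performs the authenticated connect. A condensed sketch of one pass, assuming scripts/rpc.py is on PATH as rpc.py and that key0/ckey0 were loaded during the earlier setup:

    # host NQN used throughout this run (taken from the trace above)
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    # host side: accept exactly one digest and one DH group
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # target side: admit the host with a bidirectional key pair
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: the attach succeeds only if DH-CHAP completes on both ends
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0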
00:16:39.139 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.139 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.139 09:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.139 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.139 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.139 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.139 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.139 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.139 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.139 { 00:16:39.139 "cntlid": 65, 00:16:39.139 "qid": 0, 00:16:39.139 "state": "enabled", 00:16:39.139 "thread": "nvmf_tgt_poll_group_000", 00:16:39.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.139 "listen_address": { 00:16:39.139 "trtype": "TCP", 00:16:39.139 "adrfam": "IPv4", 00:16:39.139 "traddr": "10.0.0.2", 00:16:39.139 "trsvcid": "4420" 00:16:39.139 }, 00:16:39.139 "peer_address": { 00:16:39.139 "trtype": "TCP", 00:16:39.139 "adrfam": "IPv4", 00:16:39.139 "traddr": "10.0.0.1", 00:16:39.139 "trsvcid": "48088" 00:16:39.139 }, 00:16:39.139 "auth": { 00:16:39.139 "state": "completed", 00:16:39.139 "digest": "sha384", 00:16:39.139 "dhgroup": "ffdhe3072" 00:16:39.139 } 00:16:39.139 } 00:16:39.139 ]' 00:16:39.139 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.397 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.397 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.397 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.397 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.397 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.397 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.397 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.655 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:39.655 09:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:40.222 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.222 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.222 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.222 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.222 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.222 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.222 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.222 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.480 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.739 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.739 { 00:16:40.739 "cntlid": 67, 00:16:40.739 "qid": 0, 00:16:40.739 "state": "enabled", 00:16:40.739 "thread": "nvmf_tgt_poll_group_000", 00:16:40.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.739 "listen_address": { 00:16:40.739 "trtype": "TCP", 00:16:40.739 "adrfam": "IPv4", 00:16:40.739 "traddr": "10.0.0.2", 00:16:40.739 "trsvcid": "4420" 00:16:40.739 }, 00:16:40.739 "peer_address": { 00:16:40.739 "trtype": "TCP", 00:16:40.739 "adrfam": "IPv4", 00:16:40.739 "traddr": "10.0.0.1", 00:16:40.739 "trsvcid": "48120" 00:16:40.739 }, 00:16:40.739 "auth": { 00:16:40.739 "state": "completed", 00:16:40.739 "digest": "sha384", 00:16:40.739 "dhgroup": "ffdhe3072" 00:16:40.739 } 00:16:40.739 } 00:16:40.739 ]' 00:16:40.739 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.996 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.996 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.996 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.997 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.997 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.997 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.997 09:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.255 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret 
DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:41.256 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:41.822 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.822 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.822 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.822 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.822 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.822 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.822 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.822 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.079 09:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.338 00:16:42.338 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.338 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.338 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.338 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.338 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.338 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.338 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.338 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.338 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.338 { 00:16:42.338 "cntlid": 69, 00:16:42.338 "qid": 0, 00:16:42.338 "state": "enabled", 00:16:42.338 "thread": "nvmf_tgt_poll_group_000", 00:16:42.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.338 "listen_address": { 00:16:42.338 "trtype": "TCP", 00:16:42.338 "adrfam": "IPv4", 00:16:42.338 "traddr": "10.0.0.2", 00:16:42.338 "trsvcid": "4420" 00:16:42.338 }, 00:16:42.338 "peer_address": { 00:16:42.339 "trtype": "TCP", 00:16:42.339 "adrfam": "IPv4", 00:16:42.339 "traddr": "10.0.0.1", 00:16:42.339 "trsvcid": "48142" 00:16:42.339 }, 00:16:42.339 "auth": { 00:16:42.339 "state": "completed", 00:16:42.339 "digest": "sha384", 00:16:42.339 "dhgroup": "ffdhe3072" 00:16:42.339 } 00:16:42.339 } 00:16:42.339 ]' 00:16:42.339 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.597 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.597 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.597 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.597 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.597 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.597 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.597 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:42.855 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:42.855 09:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:43.420 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.420 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.420 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.420 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.420 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.420 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.420 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.420 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
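Note that the key3 passes run without --dhchap-ctrlr-key: ckeys[3] is empty, so the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen in the trace collapses to an empty array and authentication is unidirectional (the target verifies the host, but the host does not verify the controller). A minimal sketch of that conditional-flag idiom, with hypothetical values for the key array and key id:

    # bash :+ expansion: emits the flag pair only when a controller key exists
    ckeys=("DHHC-1:03:..." "")   # hypothetical: index 0 bidirectional, index 1 not
    keyid=1
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # "${ckey[@]}" expands to zero words here, so no ctrlr key is passed on
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"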
00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.676 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.933 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.933 { 00:16:43.933 "cntlid": 71, 00:16:43.933 "qid": 0, 00:16:43.933 "state": "enabled", 00:16:43.933 "thread": "nvmf_tgt_poll_group_000", 00:16:43.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.933 "listen_address": { 00:16:43.933 "trtype": "TCP", 00:16:43.933 "adrfam": "IPv4", 00:16:43.933 "traddr": "10.0.0.2", 00:16:43.933 "trsvcid": "4420" 00:16:43.933 }, 00:16:43.933 "peer_address": { 00:16:43.933 "trtype": "TCP", 00:16:43.933 "adrfam": "IPv4", 00:16:43.933 "traddr": "10.0.0.1", 00:16:43.933 "trsvcid": "38930" 00:16:43.933 }, 00:16:43.933 "auth": { 00:16:43.933 "state": "completed", 00:16:43.933 "digest": "sha384", 00:16:43.933 "dhgroup": "ffdhe3072" 00:16:43.933 } 00:16:43.933 } 00:16:43.933 ]' 00:16:43.933 09:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.191 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.191 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.191 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.191 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.191 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.191 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.191 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.448 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:44.448 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:45.011 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.011 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.011 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.011 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.011 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.011 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.011 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.011 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.011 09:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
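After each attach the script verifies the result from both RPC sockets: the host must report the controller bdev, and the target's qpair listing must show the negotiated digest, DH group, and a completed auth state. Those checks, condensed from the @73–@77 records above (same sockets and NQNs assumed; ffdhe4096 is the group in flight at this point):

    # host side: the attached controller must be the bdev we asked for
    [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # target side: one enabled qpair, authenticated with the pair under test
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]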
00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.269 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.527 00:16:45.527 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.527 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.527 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.527 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.527 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.527 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.527 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.785 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.785 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.785 { 00:16:45.785 "cntlid": 73, 00:16:45.785 "qid": 0, 00:16:45.785 "state": "enabled", 00:16:45.785 "thread": "nvmf_tgt_poll_group_000", 00:16:45.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.785 "listen_address": { 00:16:45.785 "trtype": "TCP", 00:16:45.785 "adrfam": "IPv4", 00:16:45.785 "traddr": "10.0.0.2", 00:16:45.785 "trsvcid": "4420" 00:16:45.785 }, 00:16:45.785 "peer_address": { 00:16:45.785 "trtype": "TCP", 00:16:45.785 "adrfam": "IPv4", 00:16:45.785 "traddr": "10.0.0.1", 00:16:45.785 "trsvcid": "38962" 00:16:45.785 }, 00:16:45.785 "auth": { 00:16:45.785 "state": "completed", 00:16:45.785 "digest": "sha384", 00:16:45.785 "dhgroup": "ffdhe4096" 00:16:45.785 } 00:16:45.785 } 00:16:45.785 ]' 00:16:45.785 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.785 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.785 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.785 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.785 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.785 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.785 
09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.785 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.043 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:46.043 09:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:46.610 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.610 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.610 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.610 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.610 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.610 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.610 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.610 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.869 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.127 00:16:47.127 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.127 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.127 09:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.386 { 00:16:47.386 "cntlid": 75, 00:16:47.386 "qid": 0, 00:16:47.386 "state": "enabled", 00:16:47.386 "thread": "nvmf_tgt_poll_group_000", 00:16:47.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.386 "listen_address": { 00:16:47.386 "trtype": "TCP", 00:16:47.386 "adrfam": "IPv4", 00:16:47.386 "traddr": "10.0.0.2", 00:16:47.386 "trsvcid": "4420" 00:16:47.386 }, 00:16:47.386 "peer_address": { 00:16:47.386 "trtype": "TCP", 00:16:47.386 "adrfam": "IPv4", 00:16:47.386 "traddr": "10.0.0.1", 00:16:47.386 "trsvcid": "38990" 00:16:47.386 }, 00:16:47.386 "auth": { 00:16:47.386 "state": "completed", 00:16:47.386 "digest": "sha384", 00:16:47.386 "dhgroup": "ffdhe4096" 00:16:47.386 } 00:16:47.386 } 00:16:47.386 ]' 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.386 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.647 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:47.647 09:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:48.215 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.215 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.215 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.215 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.215 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.215 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.215 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.215 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.474 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.733 00:16:48.733 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.733 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.733 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.991 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.991 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.991 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.991 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.991 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.991 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.991 { 00:16:48.991 "cntlid": 77, 00:16:48.991 "qid": 0, 00:16:48.991 "state": "enabled", 00:16:48.991 "thread": "nvmf_tgt_poll_group_000", 00:16:48.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.991 "listen_address": { 00:16:48.991 "trtype": "TCP", 00:16:48.991 "adrfam": "IPv4", 00:16:48.991 "traddr": "10.0.0.2", 00:16:48.991 "trsvcid": "4420" 00:16:48.991 }, 00:16:48.991 "peer_address": { 00:16:48.991 "trtype": "TCP", 00:16:48.991 "adrfam": "IPv4", 00:16:48.991 "traddr": "10.0.0.1", 00:16:48.991 "trsvcid": "39004" 00:16:48.991 }, 00:16:48.991 "auth": { 00:16:48.991 "state": "completed", 00:16:48.991 "digest": "sha384", 00:16:48.991 "dhgroup": "ffdhe4096" 00:16:48.991 } 00:16:48.991 } 00:16:48.991 ]' 00:16:48.991 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.991 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.991 09:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.992 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.992 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.992 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.992 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.992 09:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.250 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:49.250 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:49.815 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.815 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.815 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.815 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.815 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.815 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.815 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:49.815 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.071 09:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.329 00:16:50.329 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.329 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.329 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.586 { 00:16:50.586 "cntlid": 79, 00:16:50.586 "qid": 0, 00:16:50.586 "state": "enabled", 00:16:50.586 "thread": "nvmf_tgt_poll_group_000", 00:16:50.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.586 "listen_address": { 00:16:50.586 "trtype": "TCP", 00:16:50.586 "adrfam": "IPv4", 00:16:50.586 "traddr": "10.0.0.2", 00:16:50.586 "trsvcid": "4420" 00:16:50.586 }, 00:16:50.586 "peer_address": { 00:16:50.586 "trtype": "TCP", 00:16:50.586 "adrfam": "IPv4", 00:16:50.586 "traddr": "10.0.0.1", 00:16:50.586 "trsvcid": "39042" 00:16:50.586 }, 00:16:50.586 "auth": { 00:16:50.586 "state": "completed", 00:16:50.586 "digest": "sha384", 00:16:50.586 "dhgroup": "ffdhe4096" 00:16:50.586 } 00:16:50.586 } 00:16:50.586 ]' 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.586 09:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.586 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.844 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:50.844 09:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:51.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.410 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.669 09:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.669 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.928 00:16:51.928 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.928 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.928 09:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.187 { 00:16:52.187 "cntlid": 81, 00:16:52.187 "qid": 0, 00:16:52.187 "state": "enabled", 00:16:52.187 "thread": "nvmf_tgt_poll_group_000", 00:16:52.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.187 "listen_address": { 00:16:52.187 "trtype": "TCP", 00:16:52.187 "adrfam": "IPv4", 00:16:52.187 "traddr": "10.0.0.2", 00:16:52.187 "trsvcid": "4420" 00:16:52.187 }, 00:16:52.187 "peer_address": { 00:16:52.187 "trtype": "TCP", 00:16:52.187 "adrfam": "IPv4", 00:16:52.187 "traddr": "10.0.0.1", 00:16:52.187 "trsvcid": "39088" 00:16:52.187 }, 00:16:52.187 "auth": { 00:16:52.187 "state": "completed", 00:16:52.187 "digest": 
"sha384", 00:16:52.187 "dhgroup": "ffdhe6144" 00:16:52.187 } 00:16:52.187 } 00:16:52.187 ]' 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.187 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.446 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.446 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.446 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.446 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:52.446 09:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:53.012 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.012 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.012 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.012 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.012 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.012 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.012 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.012 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.271 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.838 00:16:53.838 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.838 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.838 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.838 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.838 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.838 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.838 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.838 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.838 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.838 { 00:16:53.838 "cntlid": 83, 00:16:53.838 "qid": 0, 00:16:53.838 "state": "enabled", 00:16:53.838 "thread": "nvmf_tgt_poll_group_000", 00:16:53.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.838 "listen_address": { 00:16:53.838 "trtype": "TCP", 00:16:53.838 "adrfam": "IPv4", 00:16:53.838 "traddr": "10.0.0.2", 00:16:53.838 
"trsvcid": "4420" 00:16:53.838 }, 00:16:53.838 "peer_address": { 00:16:53.838 "trtype": "TCP", 00:16:53.838 "adrfam": "IPv4", 00:16:53.838 "traddr": "10.0.0.1", 00:16:53.838 "trsvcid": "55072" 00:16:53.838 }, 00:16:53.838 "auth": { 00:16:53.838 "state": "completed", 00:16:53.838 "digest": "sha384", 00:16:53.839 "dhgroup": "ffdhe6144" 00:16:53.839 } 00:16:53.839 } 00:16:53.839 ]' 00:16:53.839 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.839 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.839 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.097 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.097 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.097 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.097 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.097 09:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.355 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:54.355 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.921 
09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.921 09:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.487 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.488 { 00:16:55.488 "cntlid": 85, 00:16:55.488 "qid": 0, 00:16:55.488 "state": "enabled", 00:16:55.488 "thread": "nvmf_tgt_poll_group_000", 00:16:55.488 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.488 "listen_address": { 00:16:55.488 "trtype": "TCP", 00:16:55.488 "adrfam": "IPv4", 00:16:55.488 "traddr": "10.0.0.2", 00:16:55.488 "trsvcid": "4420" 00:16:55.488 }, 00:16:55.488 "peer_address": { 00:16:55.488 "trtype": "TCP", 00:16:55.488 "adrfam": "IPv4", 00:16:55.488 "traddr": "10.0.0.1", 00:16:55.488 "trsvcid": "55096" 00:16:55.488 }, 00:16:55.488 "auth": { 00:16:55.488 "state": "completed", 00:16:55.488 "digest": "sha384", 00:16:55.488 "dhgroup": "ffdhe6144" 00:16:55.488 } 00:16:55.488 } 00:16:55.488 ]' 00:16:55.488 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.746 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.746 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.746 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.746 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.746 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.746 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.746 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.004 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:56.004 09:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:16:56.570 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.570 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.570 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.571 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.571 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.571 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.571 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.571 09:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.830 09:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.088 00:16:57.088 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.088 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.088 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.346 { 00:16:57.346 "cntlid": 87, 
00:16:57.346 "qid": 0, 00:16:57.346 "state": "enabled", 00:16:57.346 "thread": "nvmf_tgt_poll_group_000", 00:16:57.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.346 "listen_address": { 00:16:57.346 "trtype": "TCP", 00:16:57.346 "adrfam": "IPv4", 00:16:57.346 "traddr": "10.0.0.2", 00:16:57.346 "trsvcid": "4420" 00:16:57.346 }, 00:16:57.346 "peer_address": { 00:16:57.346 "trtype": "TCP", 00:16:57.346 "adrfam": "IPv4", 00:16:57.346 "traddr": "10.0.0.1", 00:16:57.346 "trsvcid": "55128" 00:16:57.346 }, 00:16:57.346 "auth": { 00:16:57.346 "state": "completed", 00:16:57.346 "digest": "sha384", 00:16:57.346 "dhgroup": "ffdhe6144" 00:16:57.346 } 00:16:57.346 } 00:16:57.346 ]' 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.346 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.604 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:57.604 09:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:16:58.170 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.170 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.170 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.170 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.170 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.170 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.170 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.170 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.170 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.428 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.995 00:16:58.995 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.995 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.995 09:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.995 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.995 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.995 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.995 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.995 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.995 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.995 { 00:16:58.995 "cntlid": 89, 00:16:58.995 "qid": 0, 00:16:58.995 "state": "enabled", 00:16:58.995 "thread": "nvmf_tgt_poll_group_000", 00:16:58.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.995 "listen_address": { 00:16:58.995 "trtype": "TCP", 00:16:58.995 "adrfam": "IPv4", 00:16:58.995 "traddr": "10.0.0.2", 00:16:58.995 "trsvcid": "4420" 00:16:58.995 }, 00:16:58.995 "peer_address": { 00:16:58.995 "trtype": "TCP", 00:16:58.995 "adrfam": "IPv4", 00:16:58.995 "traddr": "10.0.0.1", 00:16:58.995 "trsvcid": "55152" 00:16:58.995 }, 00:16:58.995 "auth": { 00:16:58.995 "state": "completed", 00:16:58.995 "digest": "sha384", 00:16:58.995 "dhgroup": "ffdhe8192" 00:16:58.995 } 00:16:58.995 } 00:16:58.995 ]' 00:16:58.995 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.253 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.253 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.253 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.253 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.253 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.253 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.253 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.512 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:16:59.512 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:00.078 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.078 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.078 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.078 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.078 09:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.078 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.078 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.078 09:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.336 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.337 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.903 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.903 { 00:17:00.903 "cntlid": 91, 00:17:00.903 "qid": 0, 00:17:00.903 "state": "enabled", 00:17:00.903 "thread": "nvmf_tgt_poll_group_000", 00:17:00.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.903 "listen_address": { 00:17:00.903 "trtype": "TCP", 00:17:00.903 "adrfam": "IPv4", 00:17:00.903 "traddr": "10.0.0.2", 00:17:00.903 "trsvcid": "4420" 00:17:00.903 }, 00:17:00.903 "peer_address": { 00:17:00.903 "trtype": "TCP", 00:17:00.903 "adrfam": "IPv4", 00:17:00.903 "traddr": "10.0.0.1", 00:17:00.903 "trsvcid": "55182" 00:17:00.903 }, 00:17:00.903 "auth": { 00:17:00.903 "state": "completed", 00:17:00.903 "digest": "sha384", 00:17:00.903 "dhgroup": "ffdhe8192" 00:17:00.903 } 00:17:00.903 } 00:17:00.903 ]' 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.903 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.162 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.162 09:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.162 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.162 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.162 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.421 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:01.421 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:01.988 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.988 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.988 09:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.989 09:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.989 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.989 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.556 00:17:02.556 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.556 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.556 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.815 09:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.815 { 00:17:02.815 "cntlid": 93, 00:17:02.815 "qid": 0, 00:17:02.815 "state": "enabled", 00:17:02.815 "thread": "nvmf_tgt_poll_group_000", 00:17:02.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.815 "listen_address": { 00:17:02.815 "trtype": "TCP", 00:17:02.815 "adrfam": "IPv4", 00:17:02.815 "traddr": "10.0.0.2", 00:17:02.815 "trsvcid": "4420" 00:17:02.815 }, 00:17:02.815 "peer_address": { 00:17:02.815 "trtype": "TCP", 00:17:02.815 "adrfam": "IPv4", 00:17:02.815 "traddr": "10.0.0.1", 00:17:02.815 "trsvcid": "55214" 00:17:02.815 }, 00:17:02.815 "auth": { 00:17:02.815 "state": "completed", 00:17:02.815 "digest": "sha384", 00:17:02.815 "dhgroup": "ffdhe8192" 00:17:02.815 } 00:17:02.815 } 00:17:02.815 ]' 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.815 09:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.074 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:03.074 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:03.641 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.641 09:19:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.641 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.641 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.641 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.641 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.641 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.641 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.900 09:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.467 00:17:04.467 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.467 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.467 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.467 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.467 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.467 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.467 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.726 { 00:17:04.726 "cntlid": 95, 00:17:04.726 "qid": 0, 00:17:04.726 "state": "enabled", 00:17:04.726 "thread": "nvmf_tgt_poll_group_000", 00:17:04.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.726 "listen_address": { 00:17:04.726 "trtype": "TCP", 00:17:04.726 "adrfam": "IPv4", 00:17:04.726 "traddr": "10.0.0.2", 00:17:04.726 "trsvcid": "4420" 00:17:04.726 }, 00:17:04.726 "peer_address": { 00:17:04.726 "trtype": "TCP", 00:17:04.726 "adrfam": "IPv4", 00:17:04.726 "traddr": "10.0.0.1", 00:17:04.726 "trsvcid": "43042" 00:17:04.726 }, 00:17:04.726 "auth": { 00:17:04.726 "state": "completed", 00:17:04.726 "digest": "sha384", 00:17:04.726 "dhgroup": "ffdhe8192" 00:17:04.726 } 00:17:04.726 } 00:17:04.726 ]' 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.726 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.984 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:04.984 09:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:05.550 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.551 09:19:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.551 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.551 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.551 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.551 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:05.551 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.551 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.551 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.551 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.810 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.068 00:17:06.068 
09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.068 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.068 09:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.068 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.068 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.068 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.068 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.069 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.069 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.069 { 00:17:06.069 "cntlid": 97, 00:17:06.069 "qid": 0, 00:17:06.069 "state": "enabled", 00:17:06.069 "thread": "nvmf_tgt_poll_group_000", 00:17:06.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.069 "listen_address": { 00:17:06.069 "trtype": "TCP", 00:17:06.069 "adrfam": "IPv4", 00:17:06.069 "traddr": "10.0.0.2", 00:17:06.069 "trsvcid": "4420" 00:17:06.069 }, 00:17:06.069 "peer_address": { 00:17:06.069 "trtype": "TCP", 00:17:06.069 "adrfam": "IPv4", 00:17:06.069 "traddr": "10.0.0.1", 00:17:06.069 "trsvcid": "43066" 00:17:06.069 }, 00:17:06.069 "auth": { 00:17:06.069 "state": "completed", 00:17:06.069 "digest": "sha512", 00:17:06.069 "dhgroup": "null" 00:17:06.069 } 00:17:06.069 } 00:17:06.069 ]' 00:17:06.069 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.327 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.327 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.327 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.327 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.327 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.327 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.327 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.585 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:06.585 09:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:07.152 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.152 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.152 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.152 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.152 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.152 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.152 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.152 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.411 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.669 00:17:07.669 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.669 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.669 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.670 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.670 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.670 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.670 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.670 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.670 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.670 { 00:17:07.670 "cntlid": 99, 00:17:07.670 "qid": 0, 00:17:07.670 "state": "enabled", 00:17:07.670 "thread": "nvmf_tgt_poll_group_000", 00:17:07.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.670 "listen_address": { 00:17:07.670 "trtype": "TCP", 00:17:07.670 "adrfam": "IPv4", 00:17:07.670 "traddr": "10.0.0.2", 00:17:07.670 "trsvcid": "4420" 00:17:07.670 }, 00:17:07.670 "peer_address": { 00:17:07.670 "trtype": "TCP", 00:17:07.670 "adrfam": "IPv4", 00:17:07.670 "traddr": "10.0.0.1", 00:17:07.670 "trsvcid": "43096" 00:17:07.670 }, 00:17:07.670 "auth": { 00:17:07.670 "state": "completed", 00:17:07.670 "digest": "sha512", 00:17:07.670 "dhgroup": "null" 00:17:07.670 } 00:17:07.670 } 00:17:07.670 ]' 00:17:07.670 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.929 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.929 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.929 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.929 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.929 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.929 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.929 09:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.187 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:08.187 09:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:08.754 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.754 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.754 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.754 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.754 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.754 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.754 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:08.754 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
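Each pass traced above follows the same host/target RPC sequence: restrict the initiator's DH-HMAC-CHAP digests and DH groups, register the host NQN on the subsystem with the key under test, attach a controller (which is where authentication actually runs), then tear down before the next combination. A condensed sketch of one such pass follows; the rpc.py path is abbreviated from the trace, and key2/ckey2 are keyring key names registered earlier in auth.sh, outside this excerpt.

    # One iteration of the auth loop, condensed (sha512 digest + null DH group, key2).
    # Assumes the target and host SPDK apps started by auth.sh are already running.
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host side: only offer the digest/dhgroup pair under test.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # Target side: allow the host with this key (and controller key).
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach: DH-HMAC-CHAP runs during controller initialization.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Clean up so the next digest/dhgroup/key combination starts fresh.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"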
00:17:09.013 09:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.271 00:17:09.271 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.271 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.271 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.271 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.271 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.271 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.271 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.271 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.271 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.271 { 00:17:09.271 "cntlid": 101, 00:17:09.271 "qid": 0, 00:17:09.271 "state": "enabled", 00:17:09.271 "thread": "nvmf_tgt_poll_group_000", 00:17:09.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.271 "listen_address": { 00:17:09.271 "trtype": "TCP", 00:17:09.271 "adrfam": "IPv4", 00:17:09.271 "traddr": "10.0.0.2", 00:17:09.271 "trsvcid": "4420" 00:17:09.271 }, 00:17:09.271 "peer_address": { 00:17:09.271 "trtype": "TCP", 00:17:09.271 "adrfam": "IPv4", 00:17:09.271 "traddr": "10.0.0.1", 00:17:09.271 "trsvcid": "43122" 00:17:09.271 }, 00:17:09.271 "auth": { 00:17:09.271 "state": "completed", 00:17:09.272 "digest": "sha512", 00:17:09.272 "dhgroup": "null" 00:17:09.272 } 00:17:09.272 } 00:17:09.272 ]' 00:17:09.272 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.532 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.532 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.532 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:09.532 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.532 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.532 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.532 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.790 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:09.790 09:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:10.360 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.360 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.360 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.360 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.361 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.361 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.361 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.361 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.619 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:10.619 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.619 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.619 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.619 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.620 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.620 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:10.620 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.620 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.620 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.620 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.620 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.620 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.879 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.879 { 00:17:10.879 "cntlid": 103, 00:17:10.879 "qid": 0, 00:17:10.879 "state": "enabled", 00:17:10.879 "thread": "nvmf_tgt_poll_group_000", 00:17:10.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.879 "listen_address": { 00:17:10.879 "trtype": "TCP", 00:17:10.879 "adrfam": "IPv4", 00:17:10.879 "traddr": "10.0.0.2", 00:17:10.879 "trsvcid": "4420" 00:17:10.879 }, 00:17:10.879 "peer_address": { 00:17:10.879 "trtype": "TCP", 00:17:10.879 "adrfam": "IPv4", 00:17:10.879 "traddr": "10.0.0.1", 00:17:10.879 "trsvcid": "43148" 00:17:10.879 }, 00:17:10.879 "auth": { 00:17:10.879 "state": "completed", 00:17:10.879 "digest": "sha512", 00:17:10.879 "dhgroup": "null" 00:17:10.879 } 00:17:10.879 } 00:17:10.879 ]' 00:17:10.879 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.138 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.138 09:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.138 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.138 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.138 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.138 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.138 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.395 09:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:11.395 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:11.962 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.962 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.963 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.963 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.963 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.963 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.963 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.963 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.963 09:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
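The qpairs='[ ... ]' blocks above are the data the verification step consumes: after each attach, auth.sh fetches the subsystem's queue pairs and asserts that the negotiated digest, DH group, and authentication state match what was configured. The check reduces to the jq filters visible in the trace; a minimal sketch for the round in progress here (sha512 / ffdhe2048), with the rpc.py path again abbreviated:

    # Verify the live qpair authenticated with the expected parameters.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]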
00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.221 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.480 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.480 { 00:17:12.480 "cntlid": 105, 00:17:12.480 "qid": 0, 00:17:12.480 "state": "enabled", 00:17:12.480 "thread": "nvmf_tgt_poll_group_000", 00:17:12.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.480 "listen_address": { 00:17:12.480 "trtype": "TCP", 00:17:12.480 "adrfam": "IPv4", 00:17:12.480 "traddr": "10.0.0.2", 00:17:12.480 "trsvcid": "4420" 00:17:12.480 }, 00:17:12.480 "peer_address": { 00:17:12.480 "trtype": "TCP", 00:17:12.480 "adrfam": "IPv4", 00:17:12.480 "traddr": "10.0.0.1", 00:17:12.480 "trsvcid": "43170" 00:17:12.480 }, 00:17:12.480 "auth": { 00:17:12.480 "state": "completed", 00:17:12.480 "digest": "sha512", 00:17:12.480 "dhgroup": "ffdhe2048" 00:17:12.480 } 00:17:12.480 } 00:17:12.480 ]' 00:17:12.480 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.739 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.739 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.739 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.739 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.739 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.739 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.739 09:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.997 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:12.998 09:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:13.565 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.565 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.565 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.565 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.565 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.565 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.565 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.565 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.824 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.082 00:17:14.082 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.082 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.082 09:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.082 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.082 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.082 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.082 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.082 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.082 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.082 { 00:17:14.082 "cntlid": 107, 00:17:14.082 "qid": 0, 00:17:14.082 "state": "enabled", 00:17:14.082 "thread": "nvmf_tgt_poll_group_000", 00:17:14.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:14.082 "listen_address": { 00:17:14.082 "trtype": "TCP", 00:17:14.082 "adrfam": "IPv4", 00:17:14.082 "traddr": "10.0.0.2", 00:17:14.082 "trsvcid": "4420" 00:17:14.082 }, 00:17:14.082 "peer_address": { 00:17:14.082 "trtype": "TCP", 00:17:14.082 "adrfam": "IPv4", 00:17:14.082 "traddr": "10.0.0.1", 00:17:14.082 "trsvcid": "42186" 00:17:14.082 }, 00:17:14.082 "auth": { 00:17:14.082 "state": "completed", 00:17:14.082 "digest": "sha512", 00:17:14.082 "dhgroup": "ffdhe2048" 00:17:14.082 } 00:17:14.082 } 00:17:14.082 ]' 00:17:14.082 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.341 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.341 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.341 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.341 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:14.341 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.341 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.341 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.599 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:14.600 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:15.166 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.166 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.166 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.166 09:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.166 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.166 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.166 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
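Besides the SPDK initiator, each key is also exercised through the kernel host via nvme-cli, with the DHHC-1 secrets passed directly on the command line. The shape of that call, mirroring the flags in the trace (the full base64 secrets are visible above; they are shortened to '...' here purely for readability):

    # Kernel-initiator side of the same check; secrets elided for readability.
    nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0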
00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.167 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.425 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.683 { 00:17:15.683 "cntlid": 109, 00:17:15.683 "qid": 0, 00:17:15.683 "state": "enabled", 00:17:15.683 "thread": "nvmf_tgt_poll_group_000", 00:17:15.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:15.683 "listen_address": { 00:17:15.683 "trtype": "TCP", 00:17:15.683 "adrfam": "IPv4", 00:17:15.683 "traddr": "10.0.0.2", 00:17:15.683 "trsvcid": "4420" 00:17:15.683 }, 00:17:15.683 "peer_address": { 00:17:15.683 "trtype": "TCP", 00:17:15.683 "adrfam": "IPv4", 00:17:15.683 "traddr": "10.0.0.1", 00:17:15.683 "trsvcid": "42208" 00:17:15.683 }, 00:17:15.683 "auth": { 00:17:15.683 "state": "completed", 00:17:15.683 "digest": "sha512", 00:17:15.683 "dhgroup": "ffdhe2048" 00:17:15.683 } 00:17:15.683 } 00:17:15.683 ]' 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.683 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.941 09:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.941 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.941 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.941 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.941 09:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.199 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:16.199 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.766 09:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.766 09:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.025 00:17:17.025 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.025 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.025 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.284 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.284 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.284 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.284 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.284 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.284 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.284 { 00:17:17.284 "cntlid": 111, 00:17:17.284 "qid": 0, 00:17:17.284 "state": "enabled", 00:17:17.284 "thread": "nvmf_tgt_poll_group_000", 00:17:17.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.284 "listen_address": { 00:17:17.284 "trtype": "TCP", 00:17:17.284 "adrfam": "IPv4", 00:17:17.284 "traddr": "10.0.0.2", 00:17:17.284 "trsvcid": "4420" 00:17:17.284 }, 00:17:17.284 "peer_address": { 00:17:17.284 "trtype": "TCP", 00:17:17.284 "adrfam": "IPv4", 00:17:17.284 "traddr": "10.0.0.1", 00:17:17.284 "trsvcid": "42230" 00:17:17.284 }, 00:17:17.284 "auth": { 00:17:17.284 "state": "completed", 00:17:17.284 "digest": "sha512", 00:17:17.284 "dhgroup": "ffdhe2048" 00:17:17.284 } 00:17:17.284 } 00:17:17.284 ]' 00:17:17.284 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.284 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.284 
09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.543 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.543 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.543 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.543 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.543 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.802 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:17.802 09:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.368 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.626 00:17:18.626 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.626 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.626 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.885 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.885 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.885 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.885 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.885 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.885 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.885 { 00:17:18.885 "cntlid": 113, 00:17:18.885 "qid": 0, 00:17:18.885 "state": "enabled", 00:17:18.885 "thread": "nvmf_tgt_poll_group_000", 00:17:18.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:18.885 "listen_address": { 00:17:18.885 "trtype": "TCP", 00:17:18.885 "adrfam": "IPv4", 00:17:18.885 "traddr": "10.0.0.2", 00:17:18.885 "trsvcid": "4420" 00:17:18.885 }, 00:17:18.885 "peer_address": { 00:17:18.885 "trtype": "TCP", 00:17:18.885 "adrfam": "IPv4", 00:17:18.885 "traddr": "10.0.0.1", 00:17:18.885 "trsvcid": "42252" 00:17:18.885 }, 00:17:18.885 "auth": { 00:17:18.885 "state": "completed", 00:17:18.885 "digest": "sha512", 00:17:18.885 "dhgroup": "ffdhe3072" 00:17:18.885 } 00:17:18.885 } 00:17:18.885 ]' 00:17:18.885 09:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.885 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.885 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.143 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.143 09:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.143 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.143 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.143 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.401 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:19.401 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:19.967 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.967 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.967 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.967 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.967 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.967 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.967 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:19.967 09:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.224 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:20.224 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.225 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.483 00:17:20.483 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.483 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.483 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.483 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.483 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.483 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.483 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.483 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.483 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.483 { 00:17:20.483 "cntlid": 115, 00:17:20.483 "qid": 0, 00:17:20.483 "state": "enabled", 00:17:20.483 "thread": "nvmf_tgt_poll_group_000", 00:17:20.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.483 "listen_address": { 00:17:20.483 "trtype": "TCP", 00:17:20.483 "adrfam": "IPv4", 00:17:20.483 "traddr": "10.0.0.2", 00:17:20.483 "trsvcid": "4420" 00:17:20.483 }, 00:17:20.483 "peer_address": { 00:17:20.483 "trtype": "TCP", 00:17:20.483 "adrfam": "IPv4", 
00:17:20.483 "traddr": "10.0.0.1", 00:17:20.483 "trsvcid": "42270" 00:17:20.483 }, 00:17:20.483 "auth": { 00:17:20.483 "state": "completed", 00:17:20.483 "digest": "sha512", 00:17:20.483 "dhgroup": "ffdhe3072" 00:17:20.483 } 00:17:20.484 } 00:17:20.484 ]' 00:17:20.484 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.742 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.742 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.742 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.742 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.742 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.742 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.742 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.002 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:21.002 09:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:21.578 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.578 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.578 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.578 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.578 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.578 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.578 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.578 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.840 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.100 00:17:22.100 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.100 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.100 09:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.100 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.100 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.100 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.100 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.100 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.100 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.100 { 00:17:22.100 "cntlid": 117, 00:17:22.100 "qid": 0, 00:17:22.100 "state": "enabled", 00:17:22.100 "thread": "nvmf_tgt_poll_group_000", 00:17:22.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:22.100 "listen_address": { 00:17:22.100 "trtype": "TCP", 
00:17:22.100 "adrfam": "IPv4", 00:17:22.100 "traddr": "10.0.0.2", 00:17:22.100 "trsvcid": "4420" 00:17:22.100 }, 00:17:22.100 "peer_address": { 00:17:22.100 "trtype": "TCP", 00:17:22.100 "adrfam": "IPv4", 00:17:22.100 "traddr": "10.0.0.1", 00:17:22.100 "trsvcid": "42302" 00:17:22.100 }, 00:17:22.100 "auth": { 00:17:22.100 "state": "completed", 00:17:22.100 "digest": "sha512", 00:17:22.100 "dhgroup": "ffdhe3072" 00:17:22.100 } 00:17:22.100 } 00:17:22.100 ]' 00:17:22.100 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.359 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.359 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.359 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.359 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.359 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.359 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.359 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.618 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:22.618 09:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:23.186 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.186 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.186 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.186 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.186 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.186 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.186 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.186 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.445 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.704 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.704 { 00:17:23.704 "cntlid": 119, 00:17:23.704 "qid": 0, 00:17:23.704 "state": "enabled", 00:17:23.704 "thread": "nvmf_tgt_poll_group_000", 00:17:23.704 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.704 "listen_address": { 00:17:23.704 "trtype": "TCP", 00:17:23.704 "adrfam": "IPv4", 00:17:23.704 "traddr": "10.0.0.2", 00:17:23.704 "trsvcid": "4420" 00:17:23.704 }, 00:17:23.704 "peer_address": { 00:17:23.704 "trtype": "TCP", 00:17:23.704 "adrfam": "IPv4", 00:17:23.704 "traddr": "10.0.0.1", 00:17:23.704 "trsvcid": "59224" 00:17:23.704 }, 00:17:23.704 "auth": { 00:17:23.704 "state": "completed", 00:17:23.704 "digest": "sha512", 00:17:23.704 "dhgroup": "ffdhe3072" 00:17:23.704 } 00:17:23.704 } 00:17:23.704 ]' 00:17:23.704 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.963 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.963 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.963 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.963 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.963 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.963 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.963 09:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.221 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:24.222 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:24.788 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.788 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.788 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.788 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.788 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.788 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.788 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.788 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.788 09:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.047 09:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.305 00:17:25.305 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.305 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.305 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.564 09:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.564 { 00:17:25.564 "cntlid": 121, 00:17:25.564 "qid": 0, 00:17:25.564 "state": "enabled", 00:17:25.564 "thread": "nvmf_tgt_poll_group_000", 00:17:25.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.564 "listen_address": { 00:17:25.564 "trtype": "TCP", 00:17:25.564 "adrfam": "IPv4", 00:17:25.564 "traddr": "10.0.0.2", 00:17:25.564 "trsvcid": "4420" 00:17:25.564 }, 00:17:25.564 "peer_address": { 00:17:25.564 "trtype": "TCP", 00:17:25.564 "adrfam": "IPv4", 00:17:25.564 "traddr": "10.0.0.1", 00:17:25.564 "trsvcid": "59248" 00:17:25.564 }, 00:17:25.564 "auth": { 00:17:25.564 "state": "completed", 00:17:25.564 "digest": "sha512", 00:17:25.564 "dhgroup": "ffdhe4096" 00:17:25.564 } 00:17:25.564 } 00:17:25.564 ]' 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.564 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.822 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:25.822 09:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:26.390 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.390 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.390 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.390 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.390 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
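Each pass also exercises the kernel initiator: nvme-cli connects with the raw DHHC-1 secrets (the same key material the host daemon referenced by name), disconnects, and the host NQN is deregistered before the next key is configured. The shape of that leg, with <secret> placeholders for the base64 DHHC-1 strings logged above:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> --hostid <uuid> -l 0 \
      --dhchap-secret DHHC-1:00:<secret>: --dhchap-ctrl-secret DHHC-1:03:<secret>:
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0       # prints "disconnected 1 controller(s)" on success
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>

The DHHC-1:<nn>: prefix is the standard NVMe-oF secret representation (the two-digit field indicating whether and how the secret is hashed); the test reuses strings generated at setup, so the exact values are opaque at this point in the log.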
00:17:26.390 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.390 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.390 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.649 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.908 00:17:26.908 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.908 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.908 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.167 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.167 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.167 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.167 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.167 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.167 09:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.167 { 00:17:27.167 "cntlid": 123, 00:17:27.167 "qid": 0, 00:17:27.167 "state": "enabled", 00:17:27.167 "thread": "nvmf_tgt_poll_group_000", 00:17:27.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:27.167 "listen_address": { 00:17:27.167 "trtype": "TCP", 00:17:27.167 "adrfam": "IPv4", 00:17:27.167 "traddr": "10.0.0.2", 00:17:27.167 "trsvcid": "4420" 00:17:27.167 }, 00:17:27.167 "peer_address": { 00:17:27.167 "trtype": "TCP", 00:17:27.167 "adrfam": "IPv4", 00:17:27.167 "traddr": "10.0.0.1", 00:17:27.167 "trsvcid": "59264" 00:17:27.167 }, 00:17:27.167 "auth": { 00:17:27.167 "state": "completed", 00:17:27.167 "digest": "sha512", 00:17:27.167 "dhgroup": "ffdhe4096" 00:17:27.167 } 00:17:27.167 } 00:17:27.167 ]' 00:17:27.167 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.167 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.167 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.167 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.167 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.167 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.167 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.167 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.427 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:27.427 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:28.042 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.042 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.042 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.042 09:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.042 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.042 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.042 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.042 09:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.360 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.360 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.663 09:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.663 { 00:17:28.663 "cntlid": 125, 00:17:28.663 "qid": 0, 00:17:28.663 "state": "enabled", 00:17:28.663 "thread": "nvmf_tgt_poll_group_000", 00:17:28.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:28.663 "listen_address": { 00:17:28.663 "trtype": "TCP", 00:17:28.663 "adrfam": "IPv4", 00:17:28.663 "traddr": "10.0.0.2", 00:17:28.663 "trsvcid": "4420" 00:17:28.663 }, 00:17:28.663 "peer_address": { 00:17:28.663 "trtype": "TCP", 00:17:28.663 "adrfam": "IPv4", 00:17:28.663 "traddr": "10.0.0.1", 00:17:28.663 "trsvcid": "59298" 00:17:28.663 }, 00:17:28.663 "auth": { 00:17:28.663 "state": "completed", 00:17:28.663 "digest": "sha512", 00:17:28.663 "dhgroup": "ffdhe4096" 00:17:28.663 } 00:17:28.663 } 00:17:28.663 ]' 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.663 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.921 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.921 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.921 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.921 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:28.921 09:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:29.489 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.489 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.489 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.489 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.489 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.489 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.489 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.489 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.748 09:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.007 00:17:30.007 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.007 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.007 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.266 09:19:31 
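For reference, the setup half of each DH-HMAC-CHAP round traced here reduces to three RPCs: one against the host-side socket (/var/tmp/host.sock, the hostrpc wrapper) and two issued through the rpc_cmd helper, which the trace hides behind xtrace_disable and which land on the target application. A minimal sketch assembled from the commands visible in this log; the key names key0..key3 and ckey0..ckey3 refer to keyring entries registered earlier in the run (not shown here), and rpc.py paths are abbreviated:

  # host side: pin the negotiated digest and DH group for this round
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # target side: allow the host NQN with its DH-CHAP key; on bidirectional
  # rounds the trace also passes --dhchap-ctrlr-key ckey<N>
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-key key3

  # host side: attach a controller over TCP, authenticating with the same key
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3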
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.266 { 00:17:30.266 "cntlid": 127, 00:17:30.266 "qid": 0, 00:17:30.266 "state": "enabled", 00:17:30.266 "thread": "nvmf_tgt_poll_group_000", 00:17:30.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.266 "listen_address": { 00:17:30.266 "trtype": "TCP", 00:17:30.266 "adrfam": "IPv4", 00:17:30.266 "traddr": "10.0.0.2", 00:17:30.266 "trsvcid": "4420" 00:17:30.266 }, 00:17:30.266 "peer_address": { 00:17:30.266 "trtype": "TCP", 00:17:30.266 "adrfam": "IPv4", 00:17:30.266 "traddr": "10.0.0.1", 00:17:30.266 "trsvcid": "59320" 00:17:30.266 }, 00:17:30.266 "auth": { 00:17:30.266 "state": "completed", 00:17:30.266 "digest": "sha512", 00:17:30.266 "dhgroup": "ffdhe4096" 00:17:30.266 } 00:17:30.266 } 00:17:30.266 ]' 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.266 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.526 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.526 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.526 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.526 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:30.526 09:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:31.094 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.094 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.094 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.094 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.094 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.094 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.094 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.094 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.094 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.353 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.354 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.613 00:17:31.613 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.613 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.613 
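The controller-name and qpair checks that follow each attach amount to the assertions below (a sketch of what the trace verifies, not the script's literal text; this round negotiated sha512/ffdhe6144):

  # host side: a controller named nvme0 must exist after the attach
  [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # target side: the admin qpair (qid 0) must report the negotiated parameters
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]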
09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.872 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.872 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.872 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.872 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.872 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.872 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.872 { 00:17:31.872 "cntlid": 129, 00:17:31.872 "qid": 0, 00:17:31.872 "state": "enabled", 00:17:31.872 "thread": "nvmf_tgt_poll_group_000", 00:17:31.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:31.872 "listen_address": { 00:17:31.872 "trtype": "TCP", 00:17:31.872 "adrfam": "IPv4", 00:17:31.872 "traddr": "10.0.0.2", 00:17:31.872 "trsvcid": "4420" 00:17:31.872 }, 00:17:31.872 "peer_address": { 00:17:31.872 "trtype": "TCP", 00:17:31.872 "adrfam": "IPv4", 00:17:31.872 "traddr": "10.0.0.1", 00:17:31.872 "trsvcid": "59350" 00:17:31.872 }, 00:17:31.872 "auth": { 00:17:31.872 "state": "completed", 00:17:31.872 "digest": "sha512", 00:17:31.872 "dhgroup": "ffdhe6144" 00:17:31.872 } 00:17:31.872 } 00:17:31.872 ]' 00:17:31.872 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.872 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.872 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.131 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.131 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.131 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.131 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.131 09:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.391 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:32.391 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret 
DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.959 09:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.527 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.527 { 00:17:33.527 "cntlid": 131, 00:17:33.527 "qid": 0, 00:17:33.527 "state": "enabled", 00:17:33.527 "thread": "nvmf_tgt_poll_group_000", 00:17:33.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:33.527 "listen_address": { 00:17:33.527 "trtype": "TCP", 00:17:33.527 "adrfam": "IPv4", 00:17:33.527 "traddr": "10.0.0.2", 00:17:33.527 "trsvcid": "4420" 00:17:33.527 }, 00:17:33.527 "peer_address": { 00:17:33.527 "trtype": "TCP", 00:17:33.527 "adrfam": "IPv4", 00:17:33.527 "traddr": "10.0.0.1", 00:17:33.527 "trsvcid": "37258" 00:17:33.527 }, 00:17:33.527 "auth": { 00:17:33.527 "state": "completed", 00:17:33.527 "digest": "sha512", 00:17:33.527 "dhgroup": "ffdhe6144" 00:17:33.527 } 00:17:33.527 } 00:17:33.527 ]' 00:17:33.527 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.786 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.786 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.786 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.786 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.786 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.786 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.786 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.045 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:34.045 09:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:34.614 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.614 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.614 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.614 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.614 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.614 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.614 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.614 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.873 09:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.133 00:17:35.133 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.133 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.133 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.392 { 00:17:35.392 "cntlid": 133, 00:17:35.392 "qid": 0, 00:17:35.392 "state": "enabled", 00:17:35.392 "thread": "nvmf_tgt_poll_group_000", 00:17:35.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:35.392 "listen_address": { 00:17:35.392 "trtype": "TCP", 00:17:35.392 "adrfam": "IPv4", 00:17:35.392 "traddr": "10.0.0.2", 00:17:35.392 "trsvcid": "4420" 00:17:35.392 }, 00:17:35.392 "peer_address": { 00:17:35.392 "trtype": "TCP", 00:17:35.392 "adrfam": "IPv4", 00:17:35.392 "traddr": "10.0.0.1", 00:17:35.392 "trsvcid": "37296" 00:17:35.392 }, 00:17:35.392 "auth": { 00:17:35.392 "state": "completed", 00:17:35.392 "digest": "sha512", 00:17:35.392 "dhgroup": "ffdhe6144" 00:17:35.392 } 00:17:35.392 } 00:17:35.392 ]' 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.392 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.652 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:35.652 09:19:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:36.220 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.220 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.220 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.220 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.220 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.220 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.220 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.220 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:36.479 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.739 00:17:36.739 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.739 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.739 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.999 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.999 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.999 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.999 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.999 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.999 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.999 { 00:17:36.999 "cntlid": 135, 00:17:36.999 "qid": 0, 00:17:36.999 "state": "enabled", 00:17:36.999 "thread": "nvmf_tgt_poll_group_000", 00:17:36.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:36.999 "listen_address": { 00:17:36.999 "trtype": "TCP", 00:17:36.999 "adrfam": "IPv4", 00:17:36.999 "traddr": "10.0.0.2", 00:17:36.999 "trsvcid": "4420" 00:17:36.999 }, 00:17:36.999 "peer_address": { 00:17:36.999 "trtype": "TCP", 00:17:36.999 "adrfam": "IPv4", 00:17:36.999 "traddr": "10.0.0.1", 00:17:36.999 "trsvcid": "37320" 00:17:36.999 }, 00:17:36.999 "auth": { 00:17:36.999 "state": "completed", 00:17:36.999 "digest": "sha512", 00:17:36.999 "dhgroup": "ffdhe6144" 00:17:36.999 } 00:17:36.999 } 00:17:36.999 ]' 00:17:36.999 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.999 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.999 09:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.999 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.999 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.999 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.999 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.999 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.259 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:37.259 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:37.827 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.827 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.827 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.827 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.827 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.827 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.827 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.827 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.827 09:19:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.086 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.653 00:17:38.653 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.653 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.653 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.911 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.911 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.911 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.911 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.911 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.911 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.911 { 00:17:38.911 "cntlid": 137, 00:17:38.911 "qid": 0, 00:17:38.911 "state": "enabled", 00:17:38.911 "thread": "nvmf_tgt_poll_group_000", 00:17:38.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:38.911 "listen_address": { 00:17:38.911 "trtype": "TCP", 00:17:38.911 "adrfam": "IPv4", 00:17:38.911 "traddr": "10.0.0.2", 00:17:38.912 "trsvcid": "4420" 00:17:38.912 }, 00:17:38.912 "peer_address": { 00:17:38.912 "trtype": "TCP", 00:17:38.912 "adrfam": "IPv4", 00:17:38.912 "traddr": "10.0.0.1", 00:17:38.912 "trsvcid": "37348" 00:17:38.912 }, 00:17:38.912 "auth": { 00:17:38.912 "state": "completed", 00:17:38.912 "digest": "sha512", 00:17:38.912 "dhgroup": "ffdhe8192" 00:17:38.912 } 00:17:38.912 } 00:17:38.912 ]' 00:17:38.912 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.912 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.912 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.912 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.912 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.912 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.912 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.912 09:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.171 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:39.171 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:39.739 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.739 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.739 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.739 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.739 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.739 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.739 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.739 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.999 09:19:40 
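Besides the bdev-layer attach, every round also exercises the kernel initiator: nvme-cli connects with the same secrets in their DHHC-1:<id>: wire format and is then disconnected. Sketched from the key0 connect seen just above, with the base64 secret blobs elided:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'

  # the trace expects: "disconnected 1 controller(s)"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0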
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.999 09:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.568 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.568 { 00:17:40.568 "cntlid": 139, 00:17:40.568 "qid": 0, 00:17:40.568 "state": "enabled", 00:17:40.568 "thread": "nvmf_tgt_poll_group_000", 00:17:40.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:40.568 "listen_address": { 00:17:40.568 "trtype": "TCP", 00:17:40.568 "adrfam": "IPv4", 00:17:40.568 "traddr": "10.0.0.2", 00:17:40.568 "trsvcid": "4420" 00:17:40.568 }, 00:17:40.568 "peer_address": { 00:17:40.568 "trtype": "TCP", 00:17:40.568 "adrfam": "IPv4", 00:17:40.568 "traddr": "10.0.0.1", 00:17:40.568 "trsvcid": "37378" 00:17:40.568 }, 00:17:40.568 "auth": { 00:17:40.568 "state": "completed", 00:17:40.568 "digest": "sha512", 00:17:40.568 "dhgroup": "ffdhe8192" 00:17:40.568 } 00:17:40.568 } 00:17:40.568 ]' 00:17:40.568 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.827 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.827 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.827 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.827 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.827 09:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.827 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.827 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.086 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:41.086 09:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: --dhchap-ctrl-secret DHHC-1:02:ODVjNjFkZGU5NmZmN2IyZjQ3MTk4MTJkNjg5NWY3NGQ0ZGMxMjBkMTMxMTA5MWY3sBSCOQ==: 00:17:41.655 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.655 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.655 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.655 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.655 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.655 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.655 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.655 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.914 09:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.914 09:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.173 00:17:42.173 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.173 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.173 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.432 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.432 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.432 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.432 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.432 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.432 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.432 { 00:17:42.432 "cntlid": 141, 00:17:42.432 "qid": 0, 00:17:42.432 "state": "enabled", 00:17:42.432 "thread": "nvmf_tgt_poll_group_000", 00:17:42.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.432 "listen_address": { 00:17:42.432 "trtype": "TCP", 00:17:42.432 "adrfam": "IPv4", 00:17:42.432 "traddr": "10.0.0.2", 00:17:42.432 "trsvcid": "4420" 00:17:42.432 }, 00:17:42.432 "peer_address": { 00:17:42.432 "trtype": "TCP", 00:17:42.432 "adrfam": "IPv4", 00:17:42.432 "traddr": "10.0.0.1", 00:17:42.432 "trsvcid": "37394" 00:17:42.432 }, 00:17:42.432 "auth": { 00:17:42.432 "state": "completed", 00:17:42.432 "digest": "sha512", 00:17:42.432 "dhgroup": "ffdhe8192" 00:17:42.432 } 00:17:42.432 } 00:17:42.432 ]' 00:17:42.432 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.432 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.432 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.691 09:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.691 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.691 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.691 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.691 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.951 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:42.951 09:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:01:ZjA2NGQ4YWRhNGJhMTE3Y2FmOWNmMzAzZjQ0MWVmODVAhd/N: 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.519 09:19:44 
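The ckey=(...) expansion at auth.sh@68 is why the key3 rounds above authenticate one-way only: ${ckeys[$3]:+...} expands to the controller-key flag only when a controller key exists for that key ID ($3 being connect_authenticate's key-id argument), so for key3, which has none, the flag simply vanishes from the add_host and attach commands that follow. A standalone illustration of the idiom, with hypothetical values rather than the script's actual arrays:

  ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=)   # key id 3 has no controller key
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo ${#ckey[@]}     # prints 0: flag omitted, unidirectional auth
  keyid=2
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${ckey[@]}"    # prints: --dhchap-ctrlr-key ckey2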
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.519 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.778 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.778 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.778 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.778 09:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.036 00:17:44.036 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.036 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.036 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.295 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.295 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.295 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.295 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.295 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.295 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.295 { 00:17:44.295 "cntlid": 143, 00:17:44.295 "qid": 0, 00:17:44.295 "state": "enabled", 00:17:44.295 "thread": "nvmf_tgt_poll_group_000", 00:17:44.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.295 "listen_address": { 00:17:44.295 "trtype": "TCP", 00:17:44.295 "adrfam": "IPv4", 00:17:44.295 "traddr": "10.0.0.2", 00:17:44.295 "trsvcid": "4420" 00:17:44.295 }, 00:17:44.295 "peer_address": { 00:17:44.295 "trtype": "TCP", 00:17:44.295 "adrfam": "IPv4", 00:17:44.295 "traddr": "10.0.0.1", 00:17:44.295 "trsvcid": "45822" 00:17:44.295 }, 00:17:44.295 "auth": { 00:17:44.295 "state": "completed", 00:17:44.295 "digest": "sha512", 00:17:44.295 "dhgroup": "ffdhe8192" 00:17:44.295 } 00:17:44.295 } 00:17:44.295 ]' 00:17:44.295 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.295 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.295 
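All "hostrpc" invocations in this trace expand to the same rpc.py call against the second SPDK application's socket, /var/tmp/host.sock, while the target-side rpc_cmd calls use the default /var/tmp/spdk.sock. A plausible reconstruction of the helper implied by the target/auth.sh@31 lines (the function body is inferred from the trace, not quoted from the script):

    hostrpc() {
        # the host-side initiator app listens on its own RPC socket
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
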
09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.554 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.554 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.554 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.554 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.554 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.812 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:44.812 09:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:45.379 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.379 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.379 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.379 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.379 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.380 09:19:46 
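The key0 pass widens the host's negotiation matrix to every supported digest and DH group at once; the comma-joined argument lists are built with the IFS=, / printf %s idiom visible above. Schematically (array names are illustrative):

    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    hostrpc bdev_nvme_set_options \
        --dhchap-digests  "$(IFS=,; printf %s "${digests[*]}")" \
        --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"
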
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.380 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.947 00:17:45.947 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.947 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.947 09:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.206 { 00:17:46.206 "cntlid": 145, 00:17:46.206 "qid": 0, 00:17:46.206 "state": "enabled", 00:17:46.206 "thread": "nvmf_tgt_poll_group_000", 00:17:46.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:46.206 "listen_address": { 00:17:46.206 "trtype": "TCP", 00:17:46.206 "adrfam": "IPv4", 00:17:46.206 "traddr": "10.0.0.2", 00:17:46.206 "trsvcid": "4420" 00:17:46.206 }, 00:17:46.206 "peer_address": { 00:17:46.206 
"trtype": "TCP", 00:17:46.206 "adrfam": "IPv4", 00:17:46.206 "traddr": "10.0.0.1", 00:17:46.206 "trsvcid": "45846" 00:17:46.206 }, 00:17:46.206 "auth": { 00:17:46.206 "state": "completed", 00:17:46.206 "digest": "sha512", 00:17:46.206 "dhgroup": "ffdhe8192" 00:17:46.206 } 00:17:46.206 } 00:17:46.206 ]' 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.206 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.465 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:46.465 09:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:N2FjOTk0MDI3YzhlYjdmODNmOTIxN2NkOTRhN2JmMmFmMmJlZDU3ZjkyMjU3MzY1Z1q5OA==: --dhchap-ctrl-secret DHHC-1:03:ODQxMjIzMTg5NmEyYjU4ZWVmNjYwZTRhY2RlNDUyYmVmZjA1ZTk1M2MzOGIzZjExNjZjNzI3YjkzYTlhZWYxNKAgQP4=: 00:17:47.032 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.032 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:47.033 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:47.600 request: 00:17:47.600 { 00:17:47.600 "name": "nvme0", 00:17:47.600 "trtype": "tcp", 00:17:47.600 "traddr": "10.0.0.2", 00:17:47.600 "adrfam": "ipv4", 00:17:47.600 "trsvcid": "4420", 00:17:47.600 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:47.600 "prchk_reftag": false, 00:17:47.600 "prchk_guard": false, 00:17:47.600 "hdgst": false, 00:17:47.600 "ddgst": false, 00:17:47.600 "dhchap_key": "key2", 00:17:47.600 "allow_unrecognized_csi": false, 00:17:47.600 "method": "bdev_nvme_attach_controller", 00:17:47.600 "req_id": 1 00:17:47.600 } 00:17:47.600 Got JSON-RPC error response 00:17:47.600 response: 00:17:47.600 { 00:17:47.600 "code": -5, 00:17:47.600 "message": "Input/output error" 00:17:47.600 } 00:17:47.600 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:47.600 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:47.600 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:47.600 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:47.600 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.600 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.600 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.600 09:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.600 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:47.601 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:48.168 request: 00:17:48.168 { 00:17:48.168 "name": "nvme0", 00:17:48.168 "trtype": "tcp", 00:17:48.168 "traddr": "10.0.0.2", 00:17:48.168 "adrfam": "ipv4", 00:17:48.168 "trsvcid": "4420", 00:17:48.168 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:48.168 "prchk_reftag": false, 00:17:48.168 "prchk_guard": false, 00:17:48.168 "hdgst": false, 00:17:48.168 "ddgst": false, 00:17:48.168 "dhchap_key": "key1", 00:17:48.168 "dhchap_ctrlr_key": "ckey2", 00:17:48.168 "allow_unrecognized_csi": false, 00:17:48.168 "method": "bdev_nvme_attach_controller", 00:17:48.168 "req_id": 1 00:17:48.168 } 00:17:48.168 Got JSON-RPC error response 00:17:48.168 response: 00:17:48.168 { 00:17:48.168 "code": -5, 00:17:48.168 "message": "Input/output error" 00:17:48.168 } 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:48.168 09:19:48 
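The request:/response: blocks above are the expected failure mode, not a test break: bdev_nvme_attach_controller with a key the target has not been granted for this host returns JSON-RPC code -5 (Input/output error), and the NOT wrapper asserts the nonzero exit status. Boiled down to its essence (attach_args is an illustrative array collecting the transport/NQN flags shown in the trace):

    attach_args=(-t tcp -f ipv4 -a 10.0.0.2 -s 4420
                 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0)
    ! hostrpc bdev_nvme_attach_controller "${attach_args[@]}" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2    # expect JSON-RPC -5
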
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.168 09:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.168 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.426 request: 00:17:48.426 { 00:17:48.426 "name": "nvme0", 00:17:48.426 "trtype": "tcp", 00:17:48.426 "traddr": "10.0.0.2", 00:17:48.426 "adrfam": "ipv4", 00:17:48.426 "trsvcid": "4420", 00:17:48.426 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:48.426 "prchk_reftag": false, 00:17:48.426 "prchk_guard": false, 00:17:48.426 "hdgst": false, 00:17:48.426 "ddgst": false, 00:17:48.426 "dhchap_key": "key1", 00:17:48.426 "dhchap_ctrlr_key": "ckey1", 00:17:48.426 "allow_unrecognized_csi": false, 00:17:48.426 "method": "bdev_nvme_attach_controller", 00:17:48.426 "req_id": 1 00:17:48.426 } 00:17:48.427 Got JSON-RPC error response 00:17:48.427 response: 00:17:48.427 { 00:17:48.427 "code": -5, 00:17:48.427 "message": "Input/output error" 00:17:48.427 } 00:17:48.427 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:48.427 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.427 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.427 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.427 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.427 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.427 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1092911 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1092911 ']' 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1092911 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1092911 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1092911' 00:17:48.685 killing process with pid 1092911 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1092911 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1092911 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.685 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1115155 00:17:48.686 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:48.686 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1115155 00:17:48.686 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1115155 ']' 00:17:48.686 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.686 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:48.686 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.686 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:48.686 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1115155 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1115155 ']' 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
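For the keyring phase the target is restarted inside the cvl_0_0_ns_spdk network namespace with the nvmf_auth debug log component enabled, and with --wait-for-rpc so the key material can be registered before any listener comes up. The launch as captured above (backgrounding and the pid capture are inferred from nvmfpid=1115155 in the log):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!    # 1115155 in this run
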
00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:48.945 09:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.203 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:49.203 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:49.204 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:49.204 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.204 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.204 null0 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Doo 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.YZV ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YZV 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2Ls 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.BsQ ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BsQ 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.463 09:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Pii 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.awq ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.awq 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.H66 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
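In this phase the DH-CHAP secrets live in keyring entries backed by temp files rather than being passed inline; each keyN gets an optional ckeyN counterpart, and key3 has none (hence the [[ -n '' ]] branch above). The registrations, collected from the loop in the trace:

    rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Doo
    rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YZV
    rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.2Ls
    rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BsQ
    rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha384.Pii
    rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.awq
    rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.H66
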
00:17:49.463 09:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.400 nvme0n1 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.400 { 00:17:50.400 "cntlid": 1, 00:17:50.400 "qid": 0, 00:17:50.400 "state": "enabled", 00:17:50.400 "thread": "nvmf_tgt_poll_group_000", 00:17:50.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:50.400 "listen_address": { 00:17:50.400 "trtype": "TCP", 00:17:50.400 "adrfam": "IPv4", 00:17:50.400 "traddr": "10.0.0.2", 00:17:50.400 "trsvcid": "4420" 00:17:50.400 }, 00:17:50.400 "peer_address": { 00:17:50.400 "trtype": "TCP", 00:17:50.400 "adrfam": "IPv4", 00:17:50.400 "traddr": "10.0.0.1", 00:17:50.400 "trsvcid": "45906" 00:17:50.400 }, 00:17:50.400 "auth": { 00:17:50.400 "state": "completed", 00:17:50.400 "digest": "sha512", 00:17:50.400 "dhgroup": "ffdhe8192" 00:17:50.400 } 00:17:50.400 } 00:17:50.400 ]' 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.400 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.657 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.657 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.657 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:50.657 09:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:51.222 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.222 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.222 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.222 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.222 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.222 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:51.222 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.222 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.481 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.481 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:51.481 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:51.481 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:51.481 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:51.481 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:51.482 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:51.482 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.482 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:51.482 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.482 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.482 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.482 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.740 request: 00:17:51.740 { 00:17:51.740 "name": "nvme0", 00:17:51.740 "trtype": "tcp", 00:17:51.740 "traddr": "10.0.0.2", 00:17:51.740 "adrfam": "ipv4", 00:17:51.740 "trsvcid": "4420", 00:17:51.740 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:51.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:51.740 "prchk_reftag": false, 00:17:51.740 "prchk_guard": false, 00:17:51.740 "hdgst": false, 00:17:51.740 "ddgst": false, 00:17:51.740 "dhchap_key": "key3", 00:17:51.740 "allow_unrecognized_csi": false, 00:17:51.740 "method": "bdev_nvme_attach_controller", 00:17:51.740 "req_id": 1 00:17:51.740 } 00:17:51.740 Got JSON-RPC error response 00:17:51.740 response: 00:17:51.740 { 00:17:51.740 "code": -5, 00:17:51.741 "message": "Input/output error" 00:17:51.741 } 00:17:51.741 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:51.741 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:51.741 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:51.741 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:51.741 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:51.741 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:51.741 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:51.741 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.999 09:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.258 request: 00:17:52.258 { 00:17:52.258 "name": "nvme0", 00:17:52.258 "trtype": "tcp", 00:17:52.258 "traddr": "10.0.0.2", 00:17:52.258 "adrfam": "ipv4", 00:17:52.258 "trsvcid": "4420", 00:17:52.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.258 "prchk_reftag": false, 00:17:52.258 "prchk_guard": false, 00:17:52.258 "hdgst": false, 00:17:52.258 "ddgst": false, 00:17:52.258 "dhchap_key": "key3", 00:17:52.258 "allow_unrecognized_csi": false, 00:17:52.258 "method": "bdev_nvme_attach_controller", 00:17:52.258 "req_id": 1 00:17:52.258 } 00:17:52.258 Got JSON-RPC error response 00:17:52.258 response: 00:17:52.258 { 00:17:52.258 "code": -5, 00:17:52.258 "message": "Input/output error" 00:17:52.258 } 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:52.258 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.517 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:52.775 request: 00:17:52.775 { 00:17:52.775 "name": "nvme0", 00:17:52.775 "trtype": "tcp", 00:17:52.775 "traddr": "10.0.0.2", 00:17:52.775 "adrfam": "ipv4", 00:17:52.775 "trsvcid": "4420", 00:17:52.775 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.775 "prchk_reftag": false, 00:17:52.775 "prchk_guard": false, 00:17:52.775 "hdgst": false, 00:17:52.775 "ddgst": false, 00:17:52.775 "dhchap_key": "key0", 00:17:52.775 "dhchap_ctrlr_key": "key1", 00:17:52.775 "allow_unrecognized_csi": false, 00:17:52.775 "method": "bdev_nvme_attach_controller", 00:17:52.775 "req_id": 1 00:17:52.775 } 00:17:52.775 Got JSON-RPC error response 00:17:52.775 response: 00:17:52.775 { 00:17:52.775 "code": -5, 00:17:52.775 "message": "Input/output error" 00:17:52.775 } 00:17:52.775 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:52.775 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.775 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.775 09:19:53 
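The run then rotates keys in place with nvmf_subsystem_set_keys instead of removing and re-adding the host: the host entry is first moved to key1 and reconnected, then to key2/key3, after which the stale key1 attach below is expected to fail with the same -5 error. Schematically ($hostnqn and attach_args as in the earlier sketches):

    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
    hostrpc bdev_nvme_attach_controller "${attach_args[@]}" --dhchap-key key1
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    ! hostrpc bdev_nvme_attach_controller "${attach_args[@]}" --dhchap-key key1   # stale key
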
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.775 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:52.775 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:52.776 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:53.033 nvme0n1 00:17:53.033 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:53.033 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.033 09:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:53.292 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.292 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.292 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.550 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:53.550 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.550 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.550 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.550 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:53.550 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:53.550 09:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:54.117 nvme0n1 00:17:54.117 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:54.117 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:54.117 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.376 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.376 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:54.376 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.376 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.376 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.376 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:54.376 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.376 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:54.635 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.635 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:54.635 09:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: --dhchap-ctrl-secret DHHC-1:03:Y2E5N2FmNmZlYmMyZWQwNDBmMjQxNTY0MWZmMjY0MzI1ZWNjMzE1NDkwYWZmMzNkMmExNzM3YzViNjk0MTQwZY+lEak=: 00:17:55.202 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:55.202 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:55.202 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:55.202 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:55.202 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:55.202 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:55.202 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:55.202 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.202 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:55.460 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:56.026 request: 00:17:56.026 { 00:17:56.026 "name": "nvme0", 00:17:56.026 "trtype": "tcp", 00:17:56.026 "traddr": "10.0.0.2", 00:17:56.026 "adrfam": "ipv4", 00:17:56.026 "trsvcid": "4420", 00:17:56.026 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:56.026 "prchk_reftag": false, 00:17:56.026 "prchk_guard": false, 00:17:56.026 "hdgst": false, 00:17:56.026 "ddgst": false, 00:17:56.026 "dhchap_key": "key1", 00:17:56.026 "allow_unrecognized_csi": false, 00:17:56.026 "method": "bdev_nvme_attach_controller", 00:17:56.026 "req_id": 1 00:17:56.026 } 00:17:56.026 Got JSON-RPC error response 00:17:56.026 response: 00:17:56.026 { 00:17:56.026 "code": -5, 00:17:56.026 "message": "Input/output error" 00:17:56.026 } 00:17:56.026 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:56.026 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:56.026 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:56.026 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:56.026 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.026 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.026 09:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.592 nvme0n1 00:17:56.592 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:56.592 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:56.592 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.850 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.850 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.850 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.108 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.108 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.108 09:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.108 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.108 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:57.108 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:57.108 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:57.366 nvme0n1 00:17:57.366 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:57.366 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:57.366 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: '' 2s 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: ]] 00:17:57.628 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTk3NjJhOTYzN2MyNzk4NGYyZGQxMDE3MzAzNjA2NjNbfd30: 00:17:57.887 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:57.887 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:57.887 09:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: 2s 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:59.789 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: ]] 00:17:59.790 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTc1NDc1MGRlMTFlYzQxNjdmYjg4YmU5OTdkOTk2ZjhkM2FkMjAzODAzMjk2OWM0sxmqPQ==: 00:17:59.790 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:59.790 09:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:01.692 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:01.692 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:18:01.692 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:18:01.692 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.950 09:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.517 nvme0n1 00:18:02.517 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.517 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.517 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.775 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.775 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.775 09:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.034 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:03.034 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:03.034 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.293 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.293 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.293 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.293 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.293 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.293 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:03.293 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:03.552 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:03.552 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:03.552 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:03.810 09:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:04.070 request: 00:18:04.070 { 00:18:04.070 "name": "nvme0", 00:18:04.070 "dhchap_key": "key1", 00:18:04.070 "dhchap_ctrlr_key": "key3", 00:18:04.070 "method": "bdev_nvme_set_keys", 00:18:04.070 "req_id": 1 00:18:04.070 } 00:18:04.070 Got JSON-RPC error response 00:18:04.070 response: 00:18:04.070 { 00:18:04.070 "code": -13, 00:18:04.070 "message": "Permission denied" 00:18:04.070 } 00:18:04.329 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:04.329 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.329 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.329 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.329 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:04.329 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:04.329 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.329 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:04.329 09:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:05.705 09:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:06.272 nvme0n1 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
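The trace above exercises DH-HMAC-CHAP re-keying end to end. A minimal sketch of that sequence, using only the RPC calls visible in the trace (here $HOSTNQN stands in for the full nqn.2014-08.org.nvmexpress:uuid:... host NQN shown above, and key0..key3 are the pre-generated DHCHAP key files; the target/auth.sh helper wrappers are elided):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: attach a controller, authenticating with key0/key1.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1

# Target side: rotate the keys the subsystem accepts for this host.
$rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: re-authenticate the existing controller with the new pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# A deliberately mismatched pair fails, as in the negative checks above:
#   Got JSON-RPC error response: {"code": -13, "message": "Permission denied"}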
00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.272 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:06.840 request: 00:18:06.840 { 00:18:06.840 "name": "nvme0", 00:18:06.840 "dhchap_key": "key2", 00:18:06.840 "dhchap_ctrlr_key": "key0", 00:18:06.840 "method": "bdev_nvme_set_keys", 00:18:06.840 "req_id": 1 00:18:06.840 } 00:18:06.840 Got JSON-RPC error response 00:18:06.840 response: 00:18:06.840 { 00:18:06.840 "code": -13, 00:18:06.840 "message": "Permission denied" 00:18:06.840 } 00:18:06.840 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:06.840 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:06.840 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:06.840 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:06.840 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:06.840 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:06.840 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.099 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:07.099 09:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:08.034 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:08.034 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:08.034 09:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1092933 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1092933 ']' 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1092933 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:08.293 
09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1092933 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1092933' 00:18:08.293 killing process with pid 1092933 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1092933 00:18:08.293 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1092933 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:08.551 rmmod nvme_tcp 00:18:08.551 rmmod nvme_fabrics 00:18:08.551 rmmod nvme_keyring 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1115155 ']' 00:18:08.551 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1115155 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1115155 ']' 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1115155 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1115155 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1115155' 00:18:08.810 killing process with pid 1115155 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1115155 00:18:08.810 09:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1115155 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.810 09:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Doo /tmp/spdk.key-sha256.2Ls /tmp/spdk.key-sha384.Pii /tmp/spdk.key-sha512.H66 /tmp/spdk.key-sha512.YZV /tmp/spdk.key-sha384.BsQ /tmp/spdk.key-sha256.awq '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:11.347 00:18:11.347 real 2m33.914s 00:18:11.347 user 5m55.275s 00:18:11.347 sys 0m24.319s 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.347 ************************************ 00:18:11.347 END TEST nvmf_auth_target 00:18:11.347 ************************************ 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:11.347 ************************************ 00:18:11.347 START TEST nvmf_bdevio_no_huge 00:18:11.347 ************************************ 00:18:11.347 09:20:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:11.347 * Looking for test storage... 
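The next stretch of the trace steps through the lcov version check in scripts/common.sh (`lt 1.15 2` via `cmp_versions`, splitting fields on `IFS=.-:`). A minimal bash sketch of that component-wise comparison, with names taken from the trace and the `decimal` sanitization helper simplified away:

# Compare two dotted versions component by component; "lt A B" succeeds
# when A sorts strictly before B.
cmp_versions() {
    local IFS=.-:                      # split fields on '.', '-', ':' as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]                   # equal versions satisfy only <=, >=, ==
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 sorts before 2"   # the comparison the trace performs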
00:18:11.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.347 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:11.347 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:11.347 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:11.347 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:11.347 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.347 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.347 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.347 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:11.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.348 --rc genhtml_branch_coverage=1 00:18:11.348 --rc genhtml_function_coverage=1 00:18:11.348 --rc genhtml_legend=1 00:18:11.348 --rc geninfo_all_blocks=1 00:18:11.348 --rc geninfo_unexecuted_blocks=1 00:18:11.348 00:18:11.348 ' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:11.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.348 --rc genhtml_branch_coverage=1 00:18:11.348 --rc genhtml_function_coverage=1 00:18:11.348 --rc genhtml_legend=1 00:18:11.348 --rc geninfo_all_blocks=1 00:18:11.348 --rc geninfo_unexecuted_blocks=1 00:18:11.348 00:18:11.348 ' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:11.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.348 --rc genhtml_branch_coverage=1 00:18:11.348 --rc genhtml_function_coverage=1 00:18:11.348 --rc genhtml_legend=1 00:18:11.348 --rc geninfo_all_blocks=1 00:18:11.348 --rc geninfo_unexecuted_blocks=1 00:18:11.348 00:18:11.348 ' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:11.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.348 --rc genhtml_branch_coverage=1 00:18:11.348 --rc genhtml_function_coverage=1 00:18:11.348 --rc genhtml_legend=1 00:18:11.348 --rc geninfo_all_blocks=1 00:18:11.348 --rc geninfo_unexecuted_blocks=1 00:18:11.348 00:18:11.348 ' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:11.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:11.348 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.349 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.349 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.349 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:11.349 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:11.349 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:11.349 09:20:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:17.920 
09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:17.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:17.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:17.920 Found net devices under 0000:86:00.0: cvl_0_0 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:17.920 Found net devices under 0000:86:00.1: cvl_0_1 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:17.920 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:17.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:18:17.920 00:18:17.920 --- 10.0.0.2 ping statistics --- 00:18:17.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.921 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:18:17.921 00:18:17.921 --- 10.0.0.1 ping statistics --- 00:18:17.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.921 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:17.921 09:20:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1122032 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1122032 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 1122032 ']' 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.921 [2024-11-19 09:20:18.077408] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:18:17.921 [2024-11-19 09:20:18.077461] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:17.921 [2024-11-19 09:20:18.163432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.921 [2024-11-19 09:20:18.209042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.921 [2024-11-19 09:20:18.209077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.921 [2024-11-19 09:20:18.209084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.921 [2024-11-19 09:20:18.209090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.921 [2024-11-19 09:20:18.209094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
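For orientation: the nvmf_tcp_init and nvmfappstart steps traced above amount to the following sequence. These are the same commands already shown in the trace, collected in order; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are specific to this rig, and the nvmf_tgt path is shortened here.

ip netns add cvl_0_0_ns_spdk                  # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one e810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
# the target itself then runs inside that namespace, with hugepages disabled:
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78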
00:18:17.921 [2024-11-19 09:20:18.210331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:17.921 [2024-11-19 09:20:18.210446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:17.921 [2024-11-19 09:20:18.210554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.921 [2024-11-19 09:20:18.210556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.921 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.921 [2024-11-19 09:20:18.969797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.180 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:18.180 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 Malloc0 00:18:18.180 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.180 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 09:20:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 [2024-11-19 09:20:19.014093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:18.180 { 00:18:18.180 "params": { 00:18:18.180 "name": "Nvme$subsystem", 00:18:18.180 "trtype": "$TEST_TRANSPORT", 00:18:18.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:18.180 "adrfam": "ipv4", 00:18:18.180 "trsvcid": "$NVMF_PORT", 00:18:18.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:18.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:18.180 "hdgst": ${hdgst:-false}, 00:18:18.180 "ddgst": ${ddgst:-false} 00:18:18.180 }, 00:18:18.180 "method": "bdev_nvme_attach_controller" 00:18:18.180 } 00:18:18.180 EOF 00:18:18.180 )") 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:18.180 09:20:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:18.180 "params": { 00:18:18.180 "name": "Nvme1", 00:18:18.180 "trtype": "tcp", 00:18:18.180 "traddr": "10.0.0.2", 00:18:18.180 "adrfam": "ipv4", 00:18:18.180 "trsvcid": "4420", 00:18:18.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.180 "hdgst": false, 00:18:18.180 "ddgst": false 00:18:18.180 }, 00:18:18.180 "method": "bdev_nvme_attach_controller" 00:18:18.180 }' 00:18:18.180 [2024-11-19 09:20:19.064060] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
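The rpc_cmd lines above provision the target that bdevio is about to exercise. Written out as plain scripts/rpc.py calls (rpc_cmd is the harness wrapper around that script, and rpc.py here stands in for its full path), they read roughly:

rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as passed by the harness
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ram disk, 512 B blocks -> 131072 blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # surfaces as Nvme1n1 below
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The generated JSON printed above then lets bdevio attach to that listener with bdev_nvme_attach_controller over 10.0.0.2:4420.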
00:18:18.180 [2024-11-19 09:20:19.064106] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1122281 ] 00:18:18.180 [2024-11-19 09:20:19.144284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:18.180 [2024-11-19 09:20:19.192963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.180 [2024-11-19 09:20:19.193053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.180 [2024-11-19 09:20:19.193054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.437 I/O targets: 00:18:18.437 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:18.437 00:18:18.437 00:18:18.437 CUnit - A unit testing framework for C - Version 2.1-3 00:18:18.437 http://cunit.sourceforge.net/ 00:18:18.437 00:18:18.437 00:18:18.437 Suite: bdevio tests on: Nvme1n1 00:18:18.695 Test: blockdev write read block ...passed 00:18:18.695 Test: blockdev write zeroes read block ...passed 00:18:18.695 Test: blockdev write zeroes read no split ...passed 00:18:18.695 Test: blockdev write zeroes read split ...passed 00:18:18.695 Test: blockdev write zeroes read split partial ...passed 00:18:18.695 Test: blockdev reset ...[2024-11-19 09:20:19.688436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:18.695 [2024-11-19 09:20:19.688501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1735920 (9): Bad file descriptor 00:18:18.695 [2024-11-19 09:20:19.742556] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:18:18.695 passed 00:18:18.952 Test: blockdev write read 8 blocks ...passed 00:18:18.952 Test: blockdev write read size > 128k ...passed 00:18:18.952 Test: blockdev write read invalid size ...passed 00:18:18.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:18.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:18.952 Test: blockdev write read max offset ...passed 00:18:18.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:18.952 Test: blockdev writev readv 8 blocks ...passed 00:18:18.952 Test: blockdev writev readv 30 x 1block ...passed 00:18:18.952 Test: blockdev writev readv block ...passed 00:18:18.952 Test: blockdev writev readv size > 128k ...passed 00:18:18.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:18.952 Test: blockdev comparev and writev ...[2024-11-19 09:20:19.994712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.952 [2024-11-19 09:20:19.994742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:18.952 [2024-11-19 09:20:19.994757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.952 [2024-11-19 09:20:19.994766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:18.952 [2024-11-19 09:20:19.995012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.952 [2024-11-19 09:20:19.995024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:18.953 [2024-11-19 09:20:19.995035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.953 [2024-11-19 09:20:19.995043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:18.953 [2024-11-19 09:20:19.995301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.953 [2024-11-19 09:20:19.995312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:18.953 [2024-11-19 09:20:19.995324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.953 [2024-11-19 09:20:19.995331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:18.953 [2024-11-19 09:20:19.995559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.953 [2024-11-19 09:20:19.995570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:18.953 [2024-11-19 09:20:19.995582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:18.953 [2024-11-19 09:20:19.995589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:19.211 passed 00:18:19.211 Test: blockdev nvme passthru rw ...passed 00:18:19.211 Test: blockdev nvme passthru vendor specific ...[2024-11-19 09:20:20.077303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.211 [2024-11-19 09:20:20.077337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:19.211 [2024-11-19 09:20:20.077449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.211 [2024-11-19 09:20:20.077460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:19.211 [2024-11-19 09:20:20.077565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.211 [2024-11-19 09:20:20.077574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:19.211 [2024-11-19 09:20:20.077682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.211 [2024-11-19 09:20:20.077693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:19.211 passed 00:18:19.211 Test: blockdev nvme admin passthru ...passed 00:18:19.211 Test: blockdev copy ...passed 00:18:19.211 00:18:19.211 Run Summary: Type Total Ran Passed Failed Inactive 00:18:19.211 suites 1 1 n/a 0 0 00:18:19.211 tests 23 23 23 0 0 00:18:19.211 asserts 152 152 152 0 n/a 00:18:19.211 00:18:19.211 Elapsed time = 1.302 seconds 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:19.470 rmmod nvme_tcp 00:18:19.470 rmmod nvme_fabrics 00:18:19.470 rmmod nvme_keyring 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1122032 ']' 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1122032 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 1122032 ']' 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 1122032 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:19.470 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1122032 00:18:19.729 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:18:19.729 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:18:19.729 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1122032' 00:18:19.729 killing process with pid 1122032 00:18:19.729 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 1122032 00:18:19.729 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 1122032 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.988 09:20:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.894 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:21.894 00:18:21.894 real 0m10.933s 00:18:21.894 user 0m14.634s 00:18:21.894 sys 0m5.306s 00:18:21.894 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:21.894 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.894 ************************************ 00:18:21.894 END TEST nvmf_bdevio_no_huge 00:18:21.894 ************************************ 00:18:21.894 09:20:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:21.894 09:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:21.894 09:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:21.894 09:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:22.153 ************************************ 00:18:22.153 START TEST nvmf_tls 00:18:22.153 ************************************ 00:18:22.153 09:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:22.153 * Looking for test storage... 00:18:22.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.153 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:22.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.154 --rc genhtml_branch_coverage=1 00:18:22.154 --rc genhtml_function_coverage=1 00:18:22.154 --rc genhtml_legend=1 00:18:22.154 --rc geninfo_all_blocks=1 00:18:22.154 --rc geninfo_unexecuted_blocks=1 00:18:22.154 00:18:22.154 ' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:22.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.154 --rc genhtml_branch_coverage=1 00:18:22.154 --rc genhtml_function_coverage=1 00:18:22.154 --rc genhtml_legend=1 00:18:22.154 --rc geninfo_all_blocks=1 00:18:22.154 --rc geninfo_unexecuted_blocks=1 00:18:22.154 00:18:22.154 ' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:22.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.154 --rc genhtml_branch_coverage=1 00:18:22.154 --rc genhtml_function_coverage=1 00:18:22.154 --rc genhtml_legend=1 00:18:22.154 --rc geninfo_all_blocks=1 00:18:22.154 --rc geninfo_unexecuted_blocks=1 00:18:22.154 00:18:22.154 ' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:22.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.154 --rc genhtml_branch_coverage=1 00:18:22.154 --rc genhtml_function_coverage=1 00:18:22.154 --rc genhtml_legend=1 00:18:22.154 --rc geninfo_all_blocks=1 00:18:22.154 --rc geninfo_unexecuted_blocks=1 00:18:22.154 00:18:22.154 ' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
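The scripts/common.sh trace above is a field-wise version compare (lt 1.15 2) that gates which lcov coverage options get used. A minimal stand-alone sketch of the same idea, not SPDK's actual implementation:

lt() {  # succeed when $1 sorts before $2, comparing dot-separated numeric fields
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not "less than"
}
lt 1.15 2 && echo 'lcov < 2: use the legacy --rc coverage options'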
00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:22.154 09:20:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:28.727 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:28.728 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:28.728 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:28.728 Found net devices under 0000:86:00.0: cvl_0_0 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:28.728 Found net devices under 0000:86:00.1: cvl_0_1 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.728 09:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:28.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:18:28.728 00:18:28.728 --- 10.0.0.2 ping statistics --- 00:18:28.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.728 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:18:28.728 00:18:28.728 --- 10.0.0.1 ping statistics --- 00:18:28.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.728 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1126044 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1126044 00:18:28.728 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1126044 ']' 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.729 [2024-11-19 09:20:29.255962] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
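Condensed, the nvmf_tcp_init sequence traced above builds a two-port loopback topology: the target port (cvl_0_0) is moved into a private network namespace and given the target address, the initiator port (cvl_0_1) stays in the root namespace, and the two pings verify both directions before the target starts. A sketch of the equivalent commands (interface names, addresses, and the port-4420 iptables rule are the ones from this run; other NICs and rigs will differ):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target namespace -> root namespace

Every target-side process from here on is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why nvmf_tgt listens on 10.0.0.2 while spdk_nvme_perf and bdevperf connect from the root namespace.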
00:18:28.729 [2024-11-19 09:20:29.256014] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.729 [2024-11-19 09:20:29.335934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.729 [2024-11-19 09:20:29.377879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.729 [2024-11-19 09:20:29.377915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.729 [2024-11-19 09:20:29.377922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.729 [2024-11-19 09:20:29.377928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.729 [2024-11-19 09:20:29.377933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.729 [2024-11-19 09:20:29.378490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:28.729 true 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:28.729 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:28.988 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:28.988 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:28.988 09:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:29.247 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:29.247 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:29.247 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:29.247 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:29.247 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:29.506 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:29.506 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:29.764 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:29.764 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:29.764 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:29.765 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:30.024 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:30.024 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:30.024 09:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:30.024 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:30.024 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:30.284 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:30.284 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:30.284 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:30.542 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:30.543 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.uCKRBSuG5l 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.4EwbCTO8rp 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.uCKRBSuG5l 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.4EwbCTO8rp 00:18:30.802 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:31.061 09:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:31.318 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.uCKRBSuG5l 00:18:31.319 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.uCKRBSuG5l 00:18:31.319 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.319 [2024-11-19 09:20:32.325401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.319 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:31.576 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.835 [2024-11-19 09:20:32.706378] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.835 [2024-11-19 09:20:32.706603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.835 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:32.094 malloc0 00:18:32.094 09:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:32.094 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.uCKRBSuG5l 00:18:32.353 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.612 09:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.uCKRBSuG5l 00:18:42.728 Initializing NVMe Controllers 00:18:42.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:42.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:42.728 Initialization complete. Launching workers. 00:18:42.728 ======================================================== 00:18:42.728 Latency(us) 00:18:42.728 Device Information : IOPS MiB/s Average min max 00:18:42.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16260.82 63.52 3935.96 807.59 4908.78 00:18:42.728 ======================================================== 00:18:42.728 Total : 16260.82 63.52 3935.96 807.59 4908.78 00:18:42.728 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uCKRBSuG5l 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uCKRBSuG5l 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1128405 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1128405 /var/tmp/bdevperf.sock 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1128405 ']' 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
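The spdk_nvme_perf run above succeeds because setup_nvmf_tgt wired TLS end to end on the target first. Condensed from the traces, the target-side bring-up is the following RPC sequence (a sketch; the socket defaults, key file, and NQNs are the ones from this run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC sock_set_default_impl -i ssl
    $RPC sock_impl_set_options -i ssl --tls-version 13       # pin TLS 1.3 before framework init
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.uCKRBSuG5l
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Only host1, and only with key0, is now allowed to connect over TLS; the bdevperf cases below probe exactly those constraints, first with the matching key and then with deliberately broken configurations.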
00:18:42.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:42.728 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.728 [2024-11-19 09:20:43.645737] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:18:42.728 [2024-11-19 09:20:43.645785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128405 ] 00:18:42.728 [2024-11-19 09:20:43.720961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.728 [2024-11-19 09:20:43.763063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.987 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:42.987 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:42.988 09:20:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uCKRBSuG5l 00:18:43.247 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:43.247 [2024-11-19 09:20:44.217964] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.247 TLSTESTn1 00:18:43.506 09:20:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:43.506 Running I/O for 10 seconds... 
00:18:45.379 5403.00 IOPS, 21.11 MiB/s
[2024-11-19T08:20:47.815Z] 5350.00 IOPS, 20.90 MiB/s
[2024-11-19T08:20:48.752Z] 5070.33 IOPS, 19.81 MiB/s
[2024-11-19T08:20:49.696Z] 4907.50 IOPS, 19.17 MiB/s
[2024-11-19T08:20:50.633Z] 4821.00 IOPS, 18.83 MiB/s
[2024-11-19T08:20:51.571Z] 4749.67 IOPS, 18.55 MiB/s
[2024-11-19T08:20:52.509Z] 4692.14 IOPS, 18.33 MiB/s
[2024-11-19T08:20:53.446Z] 4633.62 IOPS, 18.10 MiB/s
[2024-11-19T08:20:54.825Z] 4615.22 IOPS, 18.03 MiB/s
[2024-11-19T08:20:54.825Z] 4595.20 IOPS, 17.95 MiB/s
00:18:53.766 Latency(us)
00:18:53.766 [2024-11-19T08:20:54.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:53.766 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:53.766 Verification LBA range: start 0x0 length 0x2000
00:18:53.766 TLSTESTn1 : 10.02 4598.33 17.96 0.00 0.00 27792.14 5898.24 22909.11
00:18:53.766 [2024-11-19T08:20:54.825Z] ===================================================================================================================
00:18:53.766 [2024-11-19T08:20:54.825Z] Total : 4598.33 17.96 0.00 0.00 27792.14 5898.24 22909.11
00:18:53.766 {
00:18:53.766 "results": [
00:18:53.766 {
00:18:53.766 "job": "TLSTESTn1",
00:18:53.766 "core_mask": "0x4",
00:18:53.766 "workload": "verify",
00:18:53.766 "status": "finished",
00:18:53.766 "verify_range": {
00:18:53.766 "start": 0,
00:18:53.766 "length": 8192
00:18:53.766 },
00:18:53.766 "queue_depth": 128,
00:18:53.766 "io_size": 4096,
00:18:53.766 "runtime": 10.021029,
00:18:53.766 "iops": 4598.330171482389,
00:18:53.766 "mibps": 17.962227232353083,
00:18:53.766 "io_failed": 0,
00:18:53.766 "io_timeout": 0,
00:18:53.766 "avg_latency_us": 27792.140057971017,
00:18:53.766 "min_latency_us": 5898.24,
00:18:53.766 "max_latency_us": 22909.106086956523
00:18:53.766 }
00:18:53.766 ],
00:18:53.766 "core_count": 1
00:18:53.766 }
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1128405
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1128405 ']'
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1128405
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1128405
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1128405'
killing process with pid 1128405
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1128405
Received shutdown signal, test time was about 10.000000 seconds
00:18:53.766
00:18:53.766 Latency(us)
00:18:53.766 [2024-11-19T08:20:54.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:53.766 [2024-11-19T08:20:54.825Z] ===================================================================================================================
00:18:53.766 [2024-11-19T08:20:54.825Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1128405
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4EwbCTO8rp
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4EwbCTO8rp
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4EwbCTO8rp
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4EwbCTO8rp
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1130239
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1130239 /var/tmp/bdevperf.sock
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1130239 ']'
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
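Each run_bdevperf case, starting with this one, drives the same two RPCs at bdevperf's private socket and varies only the key file, hostnqn, and subnqn. Condensed (a sketch with this case's arguments; the key file here is the deliberately mismatched second key):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4EwbCTO8rp
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The NOT wrapper expects the attach to fail; the traces that follow show how it does.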
00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.766 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.766 [2024-11-19 09:20:54.731484] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:18:53.766 [2024-11-19 09:20:54.731528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130239 ] 00:18:53.766 [2024-11-19 09:20:54.805422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.025 [2024-11-19 09:20:54.848064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.025 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.025 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:54.025 09:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4EwbCTO8rp 00:18:54.283 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.283 [2024-11-19 09:20:55.306600] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.283 [2024-11-19 09:20:55.316845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:54.283 [2024-11-19 09:20:55.317050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386170 (107): Transport endpoint is not connected 00:18:54.283 [2024-11-19 09:20:55.318042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1386170 (9): Bad file descriptor 00:18:54.283 [2024-11-19 09:20:55.319044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:54.283 [2024-11-19 09:20:55.319055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:54.283 [2024-11-19 09:20:55.319062] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:54.283 [2024-11-19 09:20:55.319072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
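The handshake collapses because /tmp/tmp.4EwbCTO8rp holds the second key (ffeeddcc…, registered nowhere on the target) rather than the key0 the target expects for host1, so the target drops the connection and the initiator surfaces errno 107 and the JSON-RPC dump below. Both key files were generated by format_interchange_psk earlier; a sketch of that derivation, assuming it matches the inline python in nvmf/common.sh (NVMe TLS PSK interchange format: base64 over the key bytes plus a CRC-32 of those bytes, assumed little-endian; the middle field is the hash hint, 01 for SHA-256 and 02 for SHA-384):

    python3 - '00112233445566778899aabbccddeeff' 01 <<'EOF'
    import base64, sys, zlib
    key, digest = sys.argv[1].encode(), sys.argv[2]
    crc = zlib.crc32(key).to_bytes(4, "little")   # checksum appended to the key bytes (endianness assumed)
    print(f"NVMeTLSkey-1:{digest}:{base64.b64encode(key + crc).decode()}:")
    EOF

Under those assumptions this reproduces the key0 string generated above, NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:.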
00:18:54.283 request: 00:18:54.283 { 00:18:54.283 "name": "TLSTEST", 00:18:54.283 "trtype": "tcp", 00:18:54.283 "traddr": "10.0.0.2", 00:18:54.283 "adrfam": "ipv4", 00:18:54.283 "trsvcid": "4420", 00:18:54.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.283 "prchk_reftag": false, 00:18:54.283 "prchk_guard": false, 00:18:54.283 "hdgst": false, 00:18:54.283 "ddgst": false, 00:18:54.283 "psk": "key0", 00:18:54.283 "allow_unrecognized_csi": false, 00:18:54.283 "method": "bdev_nvme_attach_controller", 00:18:54.283 "req_id": 1 00:18:54.283 } 00:18:54.284 Got JSON-RPC error response 00:18:54.284 response: 00:18:54.284 { 00:18:54.284 "code": -5, 00:18:54.284 "message": "Input/output error" 00:18:54.284 } 00:18:54.284 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1130239 00:18:54.284 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1130239 ']' 00:18:54.284 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1130239 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1130239 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1130239' 00:18:54.543 killing process with pid 1130239 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1130239 00:18:54.543 Received shutdown signal, test time was about 10.000000 seconds 00:18:54.543 00:18:54.543 Latency(us) 00:18:54.543 [2024-11-19T08:20:55.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.543 [2024-11-19T08:20:55.602Z] =================================================================================================================== 00:18:54.543 [2024-11-19T08:20:55.602Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1130239 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uCKRBSuG5l 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.uCKRBSuG5l 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uCKRBSuG5l 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uCKRBSuG5l 00:18:54.543 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1130464 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1130464 /var/tmp/bdevperf.sock 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1130464 ']' 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.544 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.544 [2024-11-19 09:20:55.593172] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:18:54.544 [2024-11-19 09:20:55.593222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130464 ] 00:18:54.802 [2024-11-19 09:20:55.665970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.802 [2024-11-19 09:20:55.703648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.802 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.802 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:54.802 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uCKRBSuG5l 00:18:55.060 09:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:55.319 [2024-11-19 09:20:56.170857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.319 [2024-11-19 09:20:56.178936] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:55.319 [2024-11-19 09:20:56.178965] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:55.319 [2024-11-19 09:20:56.178988] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:55.319 [2024-11-19 09:20:56.179286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd85170 (107): Transport endpoint is not connected 00:18:55.319 [2024-11-19 09:20:56.180280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd85170 (9): Bad file descriptor 00:18:55.319 [2024-11-19 09:20:56.181281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:55.319 [2024-11-19 09:20:56.181292] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:55.319 [2024-11-19 09:20:56.181299] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:55.319 [2024-11-19 09:20:56.181309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
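This case fails differently from the last: the key file is valid and the TCP connection comes up, but the target has no PSK registered for host2, so the server-side PSK lookup during the TLS handshake aborts the connection. The identity string in the tcp.c/posix.c errors above is composed from the host and subsystem NQNs, roughly:

    # sketch: the identity the target searches for in its PSK store (taken from the errors above)
    hostnqn=nqn.2016-06.io.spdk:host2
    subnqn=nqn.2016-06.io.spdk:cnode1
    echo "NVMe0R01 ${hostnqn} ${subnqn}"   # only the host1/cnode1 pair was registered, so the lookup fails

The initiator-side symptoms (errno 107, bad file descriptor, -5 Input/output error in the dump below) are the same as in the wrong-key case; only the target-side errors distinguish the two.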
00:18:55.319 request: 00:18:55.319 { 00:18:55.319 "name": "TLSTEST", 00:18:55.319 "trtype": "tcp", 00:18:55.319 "traddr": "10.0.0.2", 00:18:55.319 "adrfam": "ipv4", 00:18:55.319 "trsvcid": "4420", 00:18:55.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.319 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:55.319 "prchk_reftag": false, 00:18:55.319 "prchk_guard": false, 00:18:55.319 "hdgst": false, 00:18:55.319 "ddgst": false, 00:18:55.319 "psk": "key0", 00:18:55.319 "allow_unrecognized_csi": false, 00:18:55.319 "method": "bdev_nvme_attach_controller", 00:18:55.319 "req_id": 1 00:18:55.319 } 00:18:55.319 Got JSON-RPC error response 00:18:55.319 response: 00:18:55.319 { 00:18:55.319 "code": -5, 00:18:55.319 "message": "Input/output error" 00:18:55.319 } 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1130464 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1130464 ']' 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1130464 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1130464 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1130464' 00:18:55.319 killing process with pid 1130464 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1130464 00:18:55.319 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.319 00:18:55.319 Latency(us) 00:18:55.319 [2024-11-19T08:20:56.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.319 [2024-11-19T08:20:56.378Z] =================================================================================================================== 00:18:55.319 [2024-11-19T08:20:56.378Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.319 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1130464 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uCKRBSuG5l 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.uCKRBSuG5l 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uCKRBSuG5l 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.uCKRBSuG5l 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1130487 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1130487 /var/tmp/bdevperf.sock 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1130487 ']' 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:55.578 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.578 [2024-11-19 09:20:56.457832] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:18:55.578 [2024-11-19 09:20:56.457883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130487 ] 00:18:55.578 [2024-11-19 09:20:56.534650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.578 [2024-11-19 09:20:56.575010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.836 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:55.836 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:55.836 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.uCKRBSuG5l 00:18:55.836 09:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:56.095 [2024-11-19 09:20:57.054209] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.095 [2024-11-19 09:20:57.061191] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:56.095 [2024-11-19 09:20:57.061212] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:56.095 [2024-11-19 09:20:57.061236] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:56.095 [2024-11-19 09:20:57.061593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20de170 (107): Transport endpoint is not connected 00:18:56.095 [2024-11-19 09:20:57.062587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20de170 (9): Bad file descriptor 00:18:56.095 [2024-11-19 09:20:57.063589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:56.095 [2024-11-19 09:20:57.063599] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:56.095 [2024-11-19 09:20:57.063607] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:56.095 [2024-11-19 09:20:57.063621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:56.095 request: 00:18:56.095 { 00:18:56.095 "name": "TLSTEST", 00:18:56.095 "trtype": "tcp", 00:18:56.095 "traddr": "10.0.0.2", 00:18:56.095 "adrfam": "ipv4", 00:18:56.095 "trsvcid": "4420", 00:18:56.095 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:56.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.095 "prchk_reftag": false, 00:18:56.095 "prchk_guard": false, 00:18:56.095 "hdgst": false, 00:18:56.095 "ddgst": false, 00:18:56.095 "psk": "key0", 00:18:56.095 "allow_unrecognized_csi": false, 00:18:56.095 "method": "bdev_nvme_attach_controller", 00:18:56.095 "req_id": 1 00:18:56.095 } 00:18:56.095 Got JSON-RPC error response 00:18:56.095 response: 00:18:56.095 { 00:18:56.095 "code": -5, 00:18:56.095 "message": "Input/output error" 00:18:56.095 } 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1130487 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1130487 ']' 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1130487 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1130487 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1130487' 00:18:56.095 killing process with pid 1130487 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1130487 00:18:56.095 Received shutdown signal, test time was about 10.000000 seconds 00:18:56.095 00:18:56.095 Latency(us) 00:18:56.095 [2024-11-19T08:20:57.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.095 [2024-11-19T08:20:57.154Z] =================================================================================================================== 00:18:56.095 [2024-11-19T08:20:57.154Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:56.095 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1130487 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:56.354 
09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1130718 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1130718 /var/tmp/bdevperf.sock 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1130718 ']' 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:56.354 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.354 [2024-11-19 09:20:57.333291] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:18:56.354 [2024-11-19 09:20:57.333338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130718 ] 00:18:56.354 [2024-11-19 09:20:57.395322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.611 [2024-11-19 09:20:57.432839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.611 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:56.611 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:56.612 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:56.869 [2024-11-19 09:20:57.699438] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:56.869 [2024-11-19 09:20:57.699471] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:56.869 request: 00:18:56.869 { 00:18:56.869 "name": "key0", 00:18:56.869 "path": "", 00:18:56.869 "method": "keyring_file_add_key", 00:18:56.869 "req_id": 1 00:18:56.869 } 00:18:56.869 Got JSON-RPC error response 00:18:56.869 response: 00:18:56.869 { 00:18:56.869 "code": -1, 00:18:56.869 "message": "Operation not permitted" 00:18:56.869 } 00:18:56.869 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:56.869 [2024-11-19 09:20:57.900048] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.869 [2024-11-19 09:20:57.900076] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:56.869 request: 00:18:56.869 { 00:18:56.869 "name": "TLSTEST", 00:18:56.869 "trtype": "tcp", 00:18:56.869 "traddr": "10.0.0.2", 00:18:56.869 "adrfam": "ipv4", 00:18:56.869 "trsvcid": "4420", 00:18:56.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.869 "prchk_reftag": false, 00:18:56.869 "prchk_guard": false, 00:18:56.869 "hdgst": false, 00:18:56.869 "ddgst": false, 00:18:56.869 "psk": "key0", 00:18:56.869 "allow_unrecognized_csi": false, 00:18:56.869 "method": "bdev_nvme_attach_controller", 00:18:56.869 "req_id": 1 00:18:56.869 } 00:18:56.869 Got JSON-RPC error response 00:18:56.869 response: 00:18:56.869 { 00:18:56.869 "code": -126, 00:18:56.869 "message": "Required key not available" 00:18:56.869 } 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1130718 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1130718 ']' 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1130718 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1130718 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1130718' 00:18:57.128 killing process with pid 1130718 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1130718 00:18:57.128 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.128 00:18:57.128 Latency(us) 00:18:57.128 [2024-11-19T08:20:58.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.128 [2024-11-19T08:20:58.187Z] =================================================================================================================== 00:18:57.128 [2024-11-19T08:20:58.187Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:57.128 09:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1130718 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1126044 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1126044 ']' 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1126044 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1126044 00:18:57.128 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:57.129 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:57.129 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1126044' 00:18:57.129 killing process with pid 1126044 00:18:57.129 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1126044 00:18:57.129 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1126044 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:57.388 09:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.mCU9m0uVNT 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.mCU9m0uVNT 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1130964 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1130964 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1130964 ']' 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.388 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.388 [2024-11-19 09:20:58.431763] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:18:57.388 [2024-11-19 09:20:58.431807] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.648 [2024-11-19 09:20:58.506153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.648 [2024-11-19 09:20:58.546972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.648 [2024-11-19 09:20:58.547009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
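[Annotation] format_interchange_psk above wraps the configured key into the NVMe TLS PSK interchange form, NVMeTLSkey-1:<hh>:<base64 payload>:, where <hh> is the hash selector (02 here, selecting SHA-384) and the payload is the key bytes followed by a 4-byte little-endian CRC32. A sketch of the equivalent computation, reconstructed from the echoed key_long (the real helper is the python heredoc at nvmf/common.sh@733; the CRC/base64 layout is inferred from the output):

  # key string is used as raw bytes, not hex-decoded, judging from the base64 payload
  psk=$(python3 -c 'import base64, zlib; k = b"00112233445566778899aabbccddeeff0011223344556677"; crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")')
  key_path=$(mktemp)
  echo -n "$psk" > "$key_path"
  chmod 0600 "$key_path"   # keyring_file insists the file not be group/other accessible

This reproduces the value in the trace, NVMeTLSkey-1:02:MDAx...Njc3wWXNJw==:, and the chmod 0600 is what the permission checks later in this test revolve around.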
00:18:57.648 [2024-11-19 09:20:58.547016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.648 [2024-11-19 09:20:58.547022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.648 [2024-11-19 09:20:58.547027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:57.648 [2024-11-19 09:20:58.547592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.648 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:57.648 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:57.648 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.648 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:57.648 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.648 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.648 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.mCU9m0uVNT 00:18:57.648 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mCU9m0uVNT 00:18:57.648 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.907 [2024-11-19 09:20:58.854606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.907 09:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:58.166 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:58.424 [2024-11-19 09:20:59.239609] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:58.424 [2024-11-19 09:20:59.239802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.424 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:58.424 malloc0 00:18:58.425 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.683 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT 00:18:58.942 09:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mCU9m0uVNT 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mCU9m0uVNT 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1131231 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1131231 /var/tmp/bdevperf.sock 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1131231 ']' 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:59.201 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.201 [2024-11-19 09:21:00.066816] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
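[Annotation] This time the key path is real and the file mode is 0600, so the same client-side sequence succeeds: register the key on bdevperf's RPC socket, attach a TLS controller (which surfaces as bdev TLSTESTn1), then drive I/O. All three commands appear verbatim in the trace that follows:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The roughly 5400 IOPS of 4 KiB verify I/O reported below (about 21 MiB/s) is the TLS-encrypted data path at queue depth 128 on one core.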
00:18:59.201 [2024-11-19 09:21:00.066871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131231 ] 00:18:59.201 [2024-11-19 09:21:00.141751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.201 [2024-11-19 09:21:00.183830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.459 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:59.459 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:59.459 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT 00:18:59.459 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.717 [2024-11-19 09:21:00.634877] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.717 TLSTESTn1 00:18:59.717 09:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:59.978 Running I/O for 10 seconds... 00:19:01.847 5124.00 IOPS, 20.02 MiB/s [2024-11-19T08:21:03.841Z] 5291.50 IOPS, 20.67 MiB/s [2024-11-19T08:21:05.215Z] 5337.67 IOPS, 20.85 MiB/s [2024-11-19T08:21:06.150Z] 5373.00 IOPS, 20.99 MiB/s [2024-11-19T08:21:07.085Z] 5387.80 IOPS, 21.05 MiB/s [2024-11-19T08:21:08.019Z] 5413.50 IOPS, 21.15 MiB/s [2024-11-19T08:21:08.952Z] 5407.00 IOPS, 21.12 MiB/s [2024-11-19T08:21:09.886Z] 5413.25 IOPS, 21.15 MiB/s [2024-11-19T08:21:11.260Z] 5426.11 IOPS, 21.20 MiB/s [2024-11-19T08:21:11.260Z] 5419.00 IOPS, 21.17 MiB/s 00:19:10.201 Latency(us) 00:19:10.201 [2024-11-19T08:21:11.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.201 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:10.201 Verification LBA range: start 0x0 length 0x2000 00:19:10.201 TLSTESTn1 : 10.02 5423.18 21.18 0.00 0.00 23564.60 4729.99 65194.07 00:19:10.201 [2024-11-19T08:21:11.260Z] =================================================================================================================== 00:19:10.201 [2024-11-19T08:21:11.260Z] Total : 5423.18 21.18 0.00 0.00 23564.60 4729.99 65194.07 00:19:10.201 { 00:19:10.201 "results": [ 00:19:10.201 { 00:19:10.201 "job": "TLSTESTn1", 00:19:10.201 "core_mask": "0x4", 00:19:10.201 "workload": "verify", 00:19:10.201 "status": "finished", 00:19:10.201 "verify_range": { 00:19:10.201 "start": 0, 00:19:10.201 "length": 8192 00:19:10.201 }, 00:19:10.201 "queue_depth": 128, 00:19:10.201 "io_size": 4096, 00:19:10.201 "runtime": 10.015719, 00:19:10.201 "iops": 5423.175310729065, 00:19:10.201 "mibps": 21.18427855753541, 00:19:10.201 "io_failed": 0, 00:19:10.201 "io_timeout": 0, 00:19:10.201 "avg_latency_us": 23564.6024791982, 00:19:10.201 "min_latency_us": 4729.989565217391, 00:19:10.201 "max_latency_us": 65194.07304347826 00:19:10.201 } 00:19:10.201 ], 00:19:10.201 
"core_count": 1 00:19:10.201 } 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1131231 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1131231 ']' 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1131231 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1131231 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1131231' 00:19:10.201 killing process with pid 1131231 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1131231 00:19:10.201 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.201 00:19:10.201 Latency(us) 00:19:10.201 [2024-11-19T08:21:11.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.201 [2024-11-19T08:21:11.260Z] =================================================================================================================== 00:19:10.201 [2024-11-19T08:21:11.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.201 09:21:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1131231 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.mCU9m0uVNT 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mCU9m0uVNT 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mCU9m0uVNT 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mCU9m0uVNT 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mCU9m0uVNT 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1133519 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1133519 /var/tmp/bdevperf.sock 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1133519 ']' 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:10.201 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.201 [2024-11-19 09:21:11.142431] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
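[Annotation] The key file was flipped to 0666 just before this pass (target/tls.sh@171), and run_bdevperf is wrapped in NOT: the point is to prove that keyring_file rejects a key file that is group- or world-accessible even when its path and contents are valid. The expected failure, sketched:

  chmod 0666 /tmp/tmp.mCU9m0uVNT
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT
  # expected: "Invalid permissions for key file '/tmp/tmp.mCU9m0uVNT': 0100666",
  # JSON-RPC code -1 "Operation not permitted"; the attach that follows then
  # fails with -126 because key0 never made it into the keyring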
00:19:10.201 [2024-11-19 09:21:11.142478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133519 ] 00:19:10.201 [2024-11-19 09:21:11.216154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.460 [2024-11-19 09:21:11.258712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.460 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:10.460 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:10.460 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT 00:19:10.718 [2024-11-19 09:21:11.524941] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mCU9m0uVNT': 0100666 00:19:10.718 [2024-11-19 09:21:11.524974] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:10.718 request: 00:19:10.718 { 00:19:10.718 "name": "key0", 00:19:10.718 "path": "/tmp/tmp.mCU9m0uVNT", 00:19:10.718 "method": "keyring_file_add_key", 00:19:10.718 "req_id": 1 00:19:10.718 } 00:19:10.718 Got JSON-RPC error response 00:19:10.718 response: 00:19:10.718 { 00:19:10.718 "code": -1, 00:19:10.718 "message": "Operation not permitted" 00:19:10.718 } 00:19:10.718 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.718 [2024-11-19 09:21:11.713507] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.718 [2024-11-19 09:21:11.713539] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:10.718 request: 00:19:10.718 { 00:19:10.718 "name": "TLSTEST", 00:19:10.719 "trtype": "tcp", 00:19:10.719 "traddr": "10.0.0.2", 00:19:10.719 "adrfam": "ipv4", 00:19:10.719 "trsvcid": "4420", 00:19:10.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.719 "prchk_reftag": false, 00:19:10.719 "prchk_guard": false, 00:19:10.719 "hdgst": false, 00:19:10.719 "ddgst": false, 00:19:10.719 "psk": "key0", 00:19:10.719 "allow_unrecognized_csi": false, 00:19:10.719 "method": "bdev_nvme_attach_controller", 00:19:10.719 "req_id": 1 00:19:10.719 } 00:19:10.719 Got JSON-RPC error response 00:19:10.719 response: 00:19:10.719 { 00:19:10.719 "code": -126, 00:19:10.719 "message": "Required key not available" 00:19:10.719 } 00:19:10.719 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1133519 00:19:10.719 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1133519 ']' 00:19:10.719 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1133519 00:19:10.719 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:10.719 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:10.719 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1133519 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1133519' 00:19:10.978 killing process with pid 1133519 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1133519 00:19:10.978 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.978 00:19:10.978 Latency(us) 00:19:10.978 [2024-11-19T08:21:12.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.978 [2024-11-19T08:21:12.037Z] =================================================================================================================== 00:19:10.978 [2024-11-19T08:21:12.037Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1133519 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1130964 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1130964 ']' 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1130964 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:10.978 09:21:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1130964 00:19:10.978 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:10.978 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:10.978 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1130964' 00:19:10.978 killing process with pid 1130964 00:19:10.978 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1130964 00:19:10.978 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1130964 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1133600 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1133600 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1133600 ']' 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:11.237 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.237 [2024-11-19 09:21:12.230896] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:19:11.237 [2024-11-19 09:21:12.230959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.495 [2024-11-19 09:21:12.309629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.495 [2024-11-19 09:21:12.348301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.495 [2024-11-19 09:21:12.348336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.495 [2024-11-19 09:21:12.348343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.495 [2024-11-19 09:21:12.348349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.495 [2024-11-19 09:21:12.348353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
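[Annotation] setup_nvmf_tgt is wrapped in NOT below for the same reason: adding the still world-readable key on the target side must fail too. NOT inverts the wrapped command's exit status, so the test step passes only when the command fails. Reconstructed from the xtrace line numbers (@650..@677 of autotest_common.sh), it behaves roughly like this sketch; the es>128 normalization and the omitted [[ -n ... ]] branch are assumptions about signal handling and an optional expected-status check:

  NOT() {
      local es=0
      "$@" || es=$?                          # run the wrapped command, keep its exit status
      (( es > 128 )) && es=$(( es & ~128 ))  # assumed: fold signal-death statuses
      (( !es == 0 ))                         # succeed only when the command actually failed
  }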
00:19:11.495 [2024-11-19 09:21:12.348909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.mCU9m0uVNT 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.mCU9m0uVNT 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.mCU9m0uVNT 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mCU9m0uVNT 00:19:11.495 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:11.753 [2024-11-19 09:21:12.673169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.753 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:12.011 09:21:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:12.011 [2024-11-19 09:21:13.046129] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:12.011 [2024-11-19 09:21:13.046350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.011 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:12.270 malloc0 00:19:12.270 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:12.528 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT 00:19:12.787 [2024-11-19 
09:21:13.623852] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mCU9m0uVNT': 0100666 00:19:12.788 [2024-11-19 09:21:13.623885] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:12.788 request: 00:19:12.788 { 00:19:12.788 "name": "key0", 00:19:12.788 "path": "/tmp/tmp.mCU9m0uVNT", 00:19:12.788 "method": "keyring_file_add_key", 00:19:12.788 "req_id": 1 00:19:12.788 } 00:19:12.788 Got JSON-RPC error response 00:19:12.788 response: 00:19:12.788 { 00:19:12.788 "code": -1, 00:19:12.788 "message": "Operation not permitted" 00:19:12.788 } 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.788 [2024-11-19 09:21:13.816373] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:12.788 [2024-11-19 09:21:13.816411] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:12.788 request: 00:19:12.788 { 00:19:12.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.788 "host": "nqn.2016-06.io.spdk:host1", 00:19:12.788 "psk": "key0", 00:19:12.788 "method": "nvmf_subsystem_add_host", 00:19:12.788 "req_id": 1 00:19:12.788 } 00:19:12.788 Got JSON-RPC error response 00:19:12.788 response: 00:19:12.788 { 00:19:12.788 "code": -32603, 00:19:12.788 "message": "Internal error" 00:19:12.788 } 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1133600 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1133600 ']' 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1133600 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:12.788 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1133600 00:19:13.047 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:13.047 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:13.047 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1133600' 00:19:13.047 killing process with pid 1133600 00:19:13.047 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1133600 00:19:13.047 09:21:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1133600 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.mCU9m0uVNT 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:13.047 09:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1134082 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1134082 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1134082 ']' 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:13.047 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.306 [2024-11-19 09:21:14.115039] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:19:13.306 [2024-11-19 09:21:14.115084] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.306 [2024-11-19 09:21:14.190086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.306 [2024-11-19 09:21:14.230994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.306 [2024-11-19 09:21:14.231030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.306 [2024-11-19 09:21:14.231038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.306 [2024-11-19 09:21:14.231046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.306 [2024-11-19 09:21:14.231051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
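[Annotation] With the mode restored to 0600 (target/tls.sh@182) and a fresh target up, setup_nvmf_tgt finally runs end to end. The sequence below is the full TLS target bring-up; every command appears verbatim in the trace, with the long rpc.py path shortened here:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10   # -m 10: up to 10 namespaces
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MiB bdev, 4096-byte blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The save_config dumps captured further down (tgtconf for the target, bdevperfconf for the initiator) are the JSON equivalents of exactly this state, and could in principle be replayed on a later start through the app's JSON config option.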
00:19:13.306 [2024-11-19 09:21:14.231581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.306 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:13.306 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:13.306 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:13.306 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:13.306 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.306 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.306 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.mCU9m0uVNT 00:19:13.306 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mCU9m0uVNT 00:19:13.306 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:13.565 [2024-11-19 09:21:14.535995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.565 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:13.824 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:14.083 [2024-11-19 09:21:14.920982] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.083 [2024-11-19 09:21:14.921174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.083 09:21:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:14.083 malloc0 00:19:14.083 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:14.342 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT 00:19:14.601 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.858 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.858 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1134334 00:19:14.859 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.859 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1134334 /var/tmp/bdevperf.sock 00:19:14.859 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1134334 ']' 00:19:14.859 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.859 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:14.859 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.859 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:14.859 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.859 [2024-11-19 09:21:15.728578] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:19:14.859 [2024-11-19 09:21:15.728624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134334 ] 00:19:14.859 [2024-11-19 09:21:15.803605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.859 [2024-11-19 09:21:15.843387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.116 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.116 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:15.116 09:21:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT 00:19:15.116 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:15.374 [2024-11-19 09:21:16.314713] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.374 TLSTESTn1 00:19:15.374 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:15.940 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:15.940 "subsystems": [ 00:19:15.940 { 00:19:15.940 "subsystem": "keyring", 00:19:15.940 "config": [ 00:19:15.940 { 00:19:15.940 "method": "keyring_file_add_key", 00:19:15.940 "params": { 00:19:15.940 "name": "key0", 00:19:15.940 "path": "/tmp/tmp.mCU9m0uVNT" 00:19:15.940 } 00:19:15.940 } 00:19:15.940 ] 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "subsystem": "iobuf", 00:19:15.940 "config": [ 00:19:15.940 { 00:19:15.940 "method": "iobuf_set_options", 00:19:15.940 "params": { 00:19:15.940 "small_pool_count": 8192, 00:19:15.940 "large_pool_count": 1024, 00:19:15.940 "small_bufsize": 8192, 00:19:15.940 "large_bufsize": 135168, 00:19:15.940 "enable_numa": false 00:19:15.940 } 00:19:15.940 } 00:19:15.940 ] 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "subsystem": "sock", 00:19:15.940 "config": [ 00:19:15.940 { 00:19:15.940 "method": "sock_set_default_impl", 00:19:15.940 "params": { 00:19:15.940 "impl_name": "posix" 
00:19:15.940 } 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "method": "sock_impl_set_options", 00:19:15.940 "params": { 00:19:15.940 "impl_name": "ssl", 00:19:15.940 "recv_buf_size": 4096, 00:19:15.940 "send_buf_size": 4096, 00:19:15.940 "enable_recv_pipe": true, 00:19:15.940 "enable_quickack": false, 00:19:15.940 "enable_placement_id": 0, 00:19:15.940 "enable_zerocopy_send_server": true, 00:19:15.940 "enable_zerocopy_send_client": false, 00:19:15.940 "zerocopy_threshold": 0, 00:19:15.940 "tls_version": 0, 00:19:15.940 "enable_ktls": false 00:19:15.940 } 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "method": "sock_impl_set_options", 00:19:15.940 "params": { 00:19:15.940 "impl_name": "posix", 00:19:15.940 "recv_buf_size": 2097152, 00:19:15.940 "send_buf_size": 2097152, 00:19:15.940 "enable_recv_pipe": true, 00:19:15.940 "enable_quickack": false, 00:19:15.940 "enable_placement_id": 0, 00:19:15.940 "enable_zerocopy_send_server": true, 00:19:15.940 "enable_zerocopy_send_client": false, 00:19:15.940 "zerocopy_threshold": 0, 00:19:15.940 "tls_version": 0, 00:19:15.940 "enable_ktls": false 00:19:15.940 } 00:19:15.940 } 00:19:15.940 ] 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "subsystem": "vmd", 00:19:15.940 "config": [] 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "subsystem": "accel", 00:19:15.940 "config": [ 00:19:15.940 { 00:19:15.940 "method": "accel_set_options", 00:19:15.940 "params": { 00:19:15.940 "small_cache_size": 128, 00:19:15.940 "large_cache_size": 16, 00:19:15.940 "task_count": 2048, 00:19:15.940 "sequence_count": 2048, 00:19:15.940 "buf_count": 2048 00:19:15.940 } 00:19:15.940 } 00:19:15.940 ] 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "subsystem": "bdev", 00:19:15.940 "config": [ 00:19:15.940 { 00:19:15.940 "method": "bdev_set_options", 00:19:15.940 "params": { 00:19:15.940 "bdev_io_pool_size": 65535, 00:19:15.940 "bdev_io_cache_size": 256, 00:19:15.940 "bdev_auto_examine": true, 00:19:15.940 "iobuf_small_cache_size": 128, 00:19:15.940 "iobuf_large_cache_size": 16 00:19:15.940 } 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "method": "bdev_raid_set_options", 00:19:15.940 "params": { 00:19:15.940 "process_window_size_kb": 1024, 00:19:15.940 "process_max_bandwidth_mb_sec": 0 00:19:15.940 } 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "method": "bdev_iscsi_set_options", 00:19:15.940 "params": { 00:19:15.940 "timeout_sec": 30 00:19:15.940 } 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "method": "bdev_nvme_set_options", 00:19:15.940 "params": { 00:19:15.940 "action_on_timeout": "none", 00:19:15.940 "timeout_us": 0, 00:19:15.940 "timeout_admin_us": 0, 00:19:15.940 "keep_alive_timeout_ms": 10000, 00:19:15.940 "arbitration_burst": 0, 00:19:15.940 "low_priority_weight": 0, 00:19:15.940 "medium_priority_weight": 0, 00:19:15.940 "high_priority_weight": 0, 00:19:15.940 "nvme_adminq_poll_period_us": 10000, 00:19:15.940 "nvme_ioq_poll_period_us": 0, 00:19:15.940 "io_queue_requests": 0, 00:19:15.940 "delay_cmd_submit": true, 00:19:15.940 "transport_retry_count": 4, 00:19:15.940 "bdev_retry_count": 3, 00:19:15.940 "transport_ack_timeout": 0, 00:19:15.940 "ctrlr_loss_timeout_sec": 0, 00:19:15.940 "reconnect_delay_sec": 0, 00:19:15.940 "fast_io_fail_timeout_sec": 0, 00:19:15.940 "disable_auto_failback": false, 00:19:15.940 "generate_uuids": false, 00:19:15.940 "transport_tos": 0, 00:19:15.940 "nvme_error_stat": false, 00:19:15.940 "rdma_srq_size": 0, 00:19:15.940 "io_path_stat": false, 00:19:15.940 "allow_accel_sequence": false, 00:19:15.940 "rdma_max_cq_size": 0, 00:19:15.940 
"rdma_cm_event_timeout_ms": 0, 00:19:15.940 "dhchap_digests": [ 00:19:15.940 "sha256", 00:19:15.940 "sha384", 00:19:15.940 "sha512" 00:19:15.940 ], 00:19:15.940 "dhchap_dhgroups": [ 00:19:15.940 "null", 00:19:15.940 "ffdhe2048", 00:19:15.940 "ffdhe3072", 00:19:15.940 "ffdhe4096", 00:19:15.940 "ffdhe6144", 00:19:15.940 "ffdhe8192" 00:19:15.940 ] 00:19:15.940 } 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "method": "bdev_nvme_set_hotplug", 00:19:15.940 "params": { 00:19:15.940 "period_us": 100000, 00:19:15.940 "enable": false 00:19:15.940 } 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "method": "bdev_malloc_create", 00:19:15.940 "params": { 00:19:15.940 "name": "malloc0", 00:19:15.940 "num_blocks": 8192, 00:19:15.940 "block_size": 4096, 00:19:15.940 "physical_block_size": 4096, 00:19:15.940 "uuid": "85dd2534-4ba8-4a7a-9199-7f79d9467fac", 00:19:15.940 "optimal_io_boundary": 0, 00:19:15.940 "md_size": 0, 00:19:15.940 "dif_type": 0, 00:19:15.940 "dif_is_head_of_md": false, 00:19:15.940 "dif_pi_format": 0 00:19:15.940 } 00:19:15.940 }, 00:19:15.940 { 00:19:15.940 "method": "bdev_wait_for_examine" 00:19:15.940 } 00:19:15.940 ] 00:19:15.940 }, 00:19:15.940 { 00:19:15.941 "subsystem": "nbd", 00:19:15.941 "config": [] 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "subsystem": "scheduler", 00:19:15.941 "config": [ 00:19:15.941 { 00:19:15.941 "method": "framework_set_scheduler", 00:19:15.941 "params": { 00:19:15.941 "name": "static" 00:19:15.941 } 00:19:15.941 } 00:19:15.941 ] 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "subsystem": "nvmf", 00:19:15.941 "config": [ 00:19:15.941 { 00:19:15.941 "method": "nvmf_set_config", 00:19:15.941 "params": { 00:19:15.941 "discovery_filter": "match_any", 00:19:15.941 "admin_cmd_passthru": { 00:19:15.941 "identify_ctrlr": false 00:19:15.941 }, 00:19:15.941 "dhchap_digests": [ 00:19:15.941 "sha256", 00:19:15.941 "sha384", 00:19:15.941 "sha512" 00:19:15.941 ], 00:19:15.941 "dhchap_dhgroups": [ 00:19:15.941 "null", 00:19:15.941 "ffdhe2048", 00:19:15.941 "ffdhe3072", 00:19:15.941 "ffdhe4096", 00:19:15.941 "ffdhe6144", 00:19:15.941 "ffdhe8192" 00:19:15.941 ] 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "nvmf_set_max_subsystems", 00:19:15.941 "params": { 00:19:15.941 "max_subsystems": 1024 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "nvmf_set_crdt", 00:19:15.941 "params": { 00:19:15.941 "crdt1": 0, 00:19:15.941 "crdt2": 0, 00:19:15.941 "crdt3": 0 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "nvmf_create_transport", 00:19:15.941 "params": { 00:19:15.941 "trtype": "TCP", 00:19:15.941 "max_queue_depth": 128, 00:19:15.941 "max_io_qpairs_per_ctrlr": 127, 00:19:15.941 "in_capsule_data_size": 4096, 00:19:15.941 "max_io_size": 131072, 00:19:15.941 "io_unit_size": 131072, 00:19:15.941 "max_aq_depth": 128, 00:19:15.941 "num_shared_buffers": 511, 00:19:15.941 "buf_cache_size": 4294967295, 00:19:15.941 "dif_insert_or_strip": false, 00:19:15.941 "zcopy": false, 00:19:15.941 "c2h_success": false, 00:19:15.941 "sock_priority": 0, 00:19:15.941 "abort_timeout_sec": 1, 00:19:15.941 "ack_timeout": 0, 00:19:15.941 "data_wr_pool_size": 0 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "nvmf_create_subsystem", 00:19:15.941 "params": { 00:19:15.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.941 "allow_any_host": false, 00:19:15.941 "serial_number": "SPDK00000000000001", 00:19:15.941 "model_number": "SPDK bdev Controller", 00:19:15.941 "max_namespaces": 10, 00:19:15.941 "min_cntlid": 1, 00:19:15.941 
"max_cntlid": 65519, 00:19:15.941 "ana_reporting": false 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "nvmf_subsystem_add_host", 00:19:15.941 "params": { 00:19:15.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.941 "host": "nqn.2016-06.io.spdk:host1", 00:19:15.941 "psk": "key0" 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "nvmf_subsystem_add_ns", 00:19:15.941 "params": { 00:19:15.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.941 "namespace": { 00:19:15.941 "nsid": 1, 00:19:15.941 "bdev_name": "malloc0", 00:19:15.941 "nguid": "85DD25344BA84A7A91997F79D9467FAC", 00:19:15.941 "uuid": "85dd2534-4ba8-4a7a-9199-7f79d9467fac", 00:19:15.941 "no_auto_visible": false 00:19:15.941 } 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "nvmf_subsystem_add_listener", 00:19:15.941 "params": { 00:19:15.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.941 "listen_address": { 00:19:15.941 "trtype": "TCP", 00:19:15.941 "adrfam": "IPv4", 00:19:15.941 "traddr": "10.0.0.2", 00:19:15.941 "trsvcid": "4420" 00:19:15.941 }, 00:19:15.941 "secure_channel": true 00:19:15.941 } 00:19:15.941 } 00:19:15.941 ] 00:19:15.941 } 00:19:15.941 ] 00:19:15.941 }' 00:19:15.941 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:15.941 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:15.941 "subsystems": [ 00:19:15.941 { 00:19:15.941 "subsystem": "keyring", 00:19:15.941 "config": [ 00:19:15.941 { 00:19:15.941 "method": "keyring_file_add_key", 00:19:15.941 "params": { 00:19:15.941 "name": "key0", 00:19:15.941 "path": "/tmp/tmp.mCU9m0uVNT" 00:19:15.941 } 00:19:15.941 } 00:19:15.941 ] 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "subsystem": "iobuf", 00:19:15.941 "config": [ 00:19:15.941 { 00:19:15.941 "method": "iobuf_set_options", 00:19:15.941 "params": { 00:19:15.941 "small_pool_count": 8192, 00:19:15.941 "large_pool_count": 1024, 00:19:15.941 "small_bufsize": 8192, 00:19:15.941 "large_bufsize": 135168, 00:19:15.941 "enable_numa": false 00:19:15.941 } 00:19:15.941 } 00:19:15.941 ] 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "subsystem": "sock", 00:19:15.941 "config": [ 00:19:15.941 { 00:19:15.941 "method": "sock_set_default_impl", 00:19:15.941 "params": { 00:19:15.941 "impl_name": "posix" 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "sock_impl_set_options", 00:19:15.941 "params": { 00:19:15.941 "impl_name": "ssl", 00:19:15.941 "recv_buf_size": 4096, 00:19:15.941 "send_buf_size": 4096, 00:19:15.941 "enable_recv_pipe": true, 00:19:15.941 "enable_quickack": false, 00:19:15.941 "enable_placement_id": 0, 00:19:15.941 "enable_zerocopy_send_server": true, 00:19:15.941 "enable_zerocopy_send_client": false, 00:19:15.941 "zerocopy_threshold": 0, 00:19:15.941 "tls_version": 0, 00:19:15.941 "enable_ktls": false 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "sock_impl_set_options", 00:19:15.941 "params": { 00:19:15.941 "impl_name": "posix", 00:19:15.941 "recv_buf_size": 2097152, 00:19:15.941 "send_buf_size": 2097152, 00:19:15.941 "enable_recv_pipe": true, 00:19:15.941 "enable_quickack": false, 00:19:15.941 "enable_placement_id": 0, 00:19:15.941 "enable_zerocopy_send_server": true, 00:19:15.941 "enable_zerocopy_send_client": false, 00:19:15.941 "zerocopy_threshold": 0, 00:19:15.941 "tls_version": 0, 00:19:15.941 "enable_ktls": false 00:19:15.941 } 00:19:15.941 
} 00:19:15.941 ] 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "subsystem": "vmd", 00:19:15.941 "config": [] 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "subsystem": "accel", 00:19:15.941 "config": [ 00:19:15.941 { 00:19:15.941 "method": "accel_set_options", 00:19:15.941 "params": { 00:19:15.941 "small_cache_size": 128, 00:19:15.941 "large_cache_size": 16, 00:19:15.941 "task_count": 2048, 00:19:15.941 "sequence_count": 2048, 00:19:15.941 "buf_count": 2048 00:19:15.941 } 00:19:15.941 } 00:19:15.941 ] 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "subsystem": "bdev", 00:19:15.941 "config": [ 00:19:15.941 { 00:19:15.941 "method": "bdev_set_options", 00:19:15.941 "params": { 00:19:15.941 "bdev_io_pool_size": 65535, 00:19:15.941 "bdev_io_cache_size": 256, 00:19:15.941 "bdev_auto_examine": true, 00:19:15.941 "iobuf_small_cache_size": 128, 00:19:15.941 "iobuf_large_cache_size": 16 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "bdev_raid_set_options", 00:19:15.941 "params": { 00:19:15.941 "process_window_size_kb": 1024, 00:19:15.941 "process_max_bandwidth_mb_sec": 0 00:19:15.941 } 00:19:15.941 }, 00:19:15.941 { 00:19:15.941 "method": "bdev_iscsi_set_options", 00:19:15.942 "params": { 00:19:15.942 "timeout_sec": 30 00:19:15.942 } 00:19:15.942 }, 00:19:15.942 { 00:19:15.942 "method": "bdev_nvme_set_options", 00:19:15.942 "params": { 00:19:15.942 "action_on_timeout": "none", 00:19:15.942 "timeout_us": 0, 00:19:15.942 "timeout_admin_us": 0, 00:19:15.942 "keep_alive_timeout_ms": 10000, 00:19:15.942 "arbitration_burst": 0, 00:19:15.942 "low_priority_weight": 0, 00:19:15.942 "medium_priority_weight": 0, 00:19:15.942 "high_priority_weight": 0, 00:19:15.942 "nvme_adminq_poll_period_us": 10000, 00:19:15.942 "nvme_ioq_poll_period_us": 0, 00:19:15.942 "io_queue_requests": 512, 00:19:15.942 "delay_cmd_submit": true, 00:19:15.942 "transport_retry_count": 4, 00:19:15.942 "bdev_retry_count": 3, 00:19:15.942 "transport_ack_timeout": 0, 00:19:15.942 "ctrlr_loss_timeout_sec": 0, 00:19:15.942 "reconnect_delay_sec": 0, 00:19:15.942 "fast_io_fail_timeout_sec": 0, 00:19:15.942 "disable_auto_failback": false, 00:19:15.942 "generate_uuids": false, 00:19:15.942 "transport_tos": 0, 00:19:15.942 "nvme_error_stat": false, 00:19:15.942 "rdma_srq_size": 0, 00:19:15.942 "io_path_stat": false, 00:19:15.942 "allow_accel_sequence": false, 00:19:15.942 "rdma_max_cq_size": 0, 00:19:15.942 "rdma_cm_event_timeout_ms": 0, 00:19:15.942 "dhchap_digests": [ 00:19:15.942 "sha256", 00:19:15.942 "sha384", 00:19:15.942 "sha512" 00:19:15.942 ], 00:19:15.942 "dhchap_dhgroups": [ 00:19:15.942 "null", 00:19:15.942 "ffdhe2048", 00:19:15.942 "ffdhe3072", 00:19:15.942 "ffdhe4096", 00:19:15.942 "ffdhe6144", 00:19:15.942 "ffdhe8192" 00:19:15.942 ] 00:19:15.942 } 00:19:15.942 }, 00:19:15.942 { 00:19:15.942 "method": "bdev_nvme_attach_controller", 00:19:15.942 "params": { 00:19:15.942 "name": "TLSTEST", 00:19:15.942 "trtype": "TCP", 00:19:15.942 "adrfam": "IPv4", 00:19:15.942 "traddr": "10.0.0.2", 00:19:15.942 "trsvcid": "4420", 00:19:15.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.942 "prchk_reftag": false, 00:19:15.942 "prchk_guard": false, 00:19:15.942 "ctrlr_loss_timeout_sec": 0, 00:19:15.942 "reconnect_delay_sec": 0, 00:19:15.942 "fast_io_fail_timeout_sec": 0, 00:19:15.942 "psk": "key0", 00:19:15.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.942 "hdgst": false, 00:19:15.942 "ddgst": false, 00:19:15.942 "multipath": "multipath" 00:19:15.942 } 00:19:15.942 }, 00:19:15.942 { 00:19:15.942 "method": 
"bdev_nvme_set_hotplug", 00:19:15.942 "params": { 00:19:15.942 "period_us": 100000, 00:19:15.942 "enable": false 00:19:15.942 } 00:19:15.942 }, 00:19:15.942 { 00:19:15.942 "method": "bdev_wait_for_examine" 00:19:15.942 } 00:19:15.942 ] 00:19:15.942 }, 00:19:15.942 { 00:19:15.942 "subsystem": "nbd", 00:19:15.942 "config": [] 00:19:15.942 } 00:19:15.942 ] 00:19:15.942 }' 00:19:15.942 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1134334 00:19:15.942 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1134334 ']' 00:19:15.942 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1134334 00:19:15.942 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:15.942 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:15.942 09:21:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1134334 00:19:16.200 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1134334' 00:19:16.201 killing process with pid 1134334 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1134334 00:19:16.201 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.201 00:19:16.201 Latency(us) 00:19:16.201 [2024-11-19T08:21:17.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.201 [2024-11-19T08:21:17.260Z] =================================================================================================================== 00:19:16.201 [2024-11-19T08:21:17.260Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1134334 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1134082 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1134082 ']' 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1134082 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1134082 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1134082' 00:19:16.201 killing process with pid 1134082 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1134082 00:19:16.201 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1134082 00:19:16.461 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:16.461 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.461 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.461 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:16.461 "subsystems": [ 00:19:16.461 { 00:19:16.461 "subsystem": "keyring", 00:19:16.461 "config": [ 00:19:16.461 { 00:19:16.461 "method": "keyring_file_add_key", 00:19:16.461 "params": { 00:19:16.461 "name": "key0", 00:19:16.461 "path": "/tmp/tmp.mCU9m0uVNT" 00:19:16.461 } 00:19:16.461 } 00:19:16.461 ] 00:19:16.461 }, 00:19:16.461 { 00:19:16.461 "subsystem": "iobuf", 00:19:16.461 "config": [ 00:19:16.461 { 00:19:16.461 "method": "iobuf_set_options", 00:19:16.461 "params": { 00:19:16.461 "small_pool_count": 8192, 00:19:16.461 "large_pool_count": 1024, 00:19:16.461 "small_bufsize": 8192, 00:19:16.461 "large_bufsize": 135168, 00:19:16.461 "enable_numa": false 00:19:16.461 } 00:19:16.461 } 00:19:16.461 ] 00:19:16.461 }, 00:19:16.461 { 00:19:16.461 "subsystem": "sock", 00:19:16.461 "config": [ 00:19:16.461 { 00:19:16.461 "method": "sock_set_default_impl", 00:19:16.461 "params": { 00:19:16.461 "impl_name": "posix" 00:19:16.461 } 00:19:16.461 }, 00:19:16.461 { 00:19:16.461 "method": "sock_impl_set_options", 00:19:16.461 "params": { 00:19:16.461 "impl_name": "ssl", 00:19:16.461 "recv_buf_size": 4096, 00:19:16.461 "send_buf_size": 4096, 00:19:16.461 "enable_recv_pipe": true, 00:19:16.461 "enable_quickack": false, 00:19:16.461 "enable_placement_id": 0, 00:19:16.461 "enable_zerocopy_send_server": true, 00:19:16.461 "enable_zerocopy_send_client": false, 00:19:16.461 "zerocopy_threshold": 0, 00:19:16.461 "tls_version": 0, 00:19:16.461 "enable_ktls": false 00:19:16.461 } 00:19:16.461 }, 00:19:16.461 { 00:19:16.461 "method": "sock_impl_set_options", 00:19:16.461 "params": { 00:19:16.461 "impl_name": "posix", 00:19:16.461 "recv_buf_size": 2097152, 00:19:16.461 "send_buf_size": 2097152, 00:19:16.461 "enable_recv_pipe": true, 00:19:16.461 "enable_quickack": false, 00:19:16.461 "enable_placement_id": 0, 00:19:16.461 "enable_zerocopy_send_server": true, 00:19:16.461 "enable_zerocopy_send_client": false, 00:19:16.461 "zerocopy_threshold": 0, 00:19:16.461 "tls_version": 0, 00:19:16.461 "enable_ktls": false 00:19:16.461 } 00:19:16.461 } 00:19:16.461 ] 00:19:16.461 }, 00:19:16.461 { 00:19:16.461 "subsystem": "vmd", 00:19:16.461 "config": [] 00:19:16.461 }, 00:19:16.461 { 00:19:16.461 "subsystem": "accel", 00:19:16.461 "config": [ 00:19:16.461 { 00:19:16.461 "method": "accel_set_options", 00:19:16.461 "params": { 00:19:16.461 "small_cache_size": 128, 00:19:16.461 "large_cache_size": 16, 00:19:16.461 "task_count": 2048, 00:19:16.461 "sequence_count": 2048, 00:19:16.461 "buf_count": 2048 00:19:16.461 } 00:19:16.461 } 00:19:16.461 ] 00:19:16.461 }, 00:19:16.461 { 00:19:16.461 "subsystem": "bdev", 00:19:16.461 "config": [ 00:19:16.461 { 00:19:16.461 "method": "bdev_set_options", 00:19:16.461 "params": { 00:19:16.461 "bdev_io_pool_size": 65535, 00:19:16.461 "bdev_io_cache_size": 256, 00:19:16.461 "bdev_auto_examine": true, 00:19:16.461 "iobuf_small_cache_size": 128, 00:19:16.461 "iobuf_large_cache_size": 16 00:19:16.461 } 00:19:16.461 }, 00:19:16.461 { 00:19:16.461 "method": "bdev_raid_set_options", 00:19:16.461 "params": { 00:19:16.461 "process_window_size_kb": 1024, 00:19:16.461 "process_max_bandwidth_mb_sec": 0 00:19:16.461 } 00:19:16.461 }, 
00:19:16.461 { 00:19:16.461 "method": "bdev_iscsi_set_options", 00:19:16.461 "params": { 00:19:16.461 "timeout_sec": 30 00:19:16.461 } 00:19:16.461 }, 00:19:16.461 { 00:19:16.461 "method": "bdev_nvme_set_options", 00:19:16.461 "params": { 00:19:16.461 "action_on_timeout": "none", 00:19:16.461 "timeout_us": 0, 00:19:16.461 "timeout_admin_us": 0, 00:19:16.461 "keep_alive_timeout_ms": 10000, 00:19:16.461 "arbitration_burst": 0, 00:19:16.461 "low_priority_weight": 0, 00:19:16.461 "medium_priority_weight": 0, 00:19:16.461 "high_priority_weight": 0, 00:19:16.461 "nvme_adminq_poll_period_us": 10000, 00:19:16.461 "nvme_ioq_poll_period_us": 0, 00:19:16.461 "io_queue_requests": 0, 00:19:16.461 "delay_cmd_submit": true, 00:19:16.461 "transport_retry_count": 4, 00:19:16.461 "bdev_retry_count": 3, 00:19:16.461 "transport_ack_timeout": 0, 00:19:16.461 "ctrlr_loss_timeout_sec": 0, 00:19:16.461 "reconnect_delay_sec": 0, 00:19:16.461 "fast_io_fail_timeout_sec": 0, 00:19:16.461 "disable_auto_failback": false, 00:19:16.461 "generate_uuids": false, 00:19:16.461 "transport_tos": 0, 00:19:16.461 "nvme_error_stat": false, 00:19:16.461 "rdma_srq_size": 0, 00:19:16.461 "io_path_stat": false, 00:19:16.461 "allow_accel_sequence": false, 00:19:16.461 "rdma_max_cq_size": 0, 00:19:16.461 "rdma_cm_event_timeout_ms": 0, 00:19:16.461 "dhchap_digests": [ 00:19:16.461 "sha256", 00:19:16.461 "sha384", 00:19:16.461 "sha512" 00:19:16.461 ], 00:19:16.461 "dhchap_dhgroups": [ 00:19:16.461 "null", 00:19:16.461 "ffdhe2048", 00:19:16.461 "ffdhe3072", 00:19:16.461 "ffdhe4096", 00:19:16.461 "ffdhe6144", 00:19:16.462 "ffdhe8192" 00:19:16.462 ] 00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "bdev_nvme_set_hotplug", 00:19:16.462 "params": { 00:19:16.462 "period_us": 100000, 00:19:16.462 "enable": false 00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "bdev_malloc_create", 00:19:16.462 "params": { 00:19:16.462 "name": "malloc0", 00:19:16.462 "num_blocks": 8192, 00:19:16.462 "block_size": 4096, 00:19:16.462 "physical_block_size": 4096, 00:19:16.462 "uuid": "85dd2534-4ba8-4a7a-9199-7f79d9467fac", 00:19:16.462 "optimal_io_boundary": 0, 00:19:16.462 "md_size": 0, 00:19:16.462 "dif_type": 0, 00:19:16.462 "dif_is_head_of_md": false, 00:19:16.462 "dif_pi_format": 0 00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "bdev_wait_for_examine" 00:19:16.462 } 00:19:16.462 ] 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "subsystem": "nbd", 00:19:16.462 "config": [] 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "subsystem": "scheduler", 00:19:16.462 "config": [ 00:19:16.462 { 00:19:16.462 "method": "framework_set_scheduler", 00:19:16.462 "params": { 00:19:16.462 "name": "static" 00:19:16.462 } 00:19:16.462 } 00:19:16.462 ] 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "subsystem": "nvmf", 00:19:16.462 "config": [ 00:19:16.462 { 00:19:16.462 "method": "nvmf_set_config", 00:19:16.462 "params": { 00:19:16.462 "discovery_filter": "match_any", 00:19:16.462 "admin_cmd_passthru": { 00:19:16.462 "identify_ctrlr": false 00:19:16.462 }, 00:19:16.462 "dhchap_digests": [ 00:19:16.462 "sha256", 00:19:16.462 "sha384", 00:19:16.462 "sha512" 00:19:16.462 ], 00:19:16.462 "dhchap_dhgroups": [ 00:19:16.462 "null", 00:19:16.462 "ffdhe2048", 00:19:16.462 "ffdhe3072", 00:19:16.462 "ffdhe4096", 00:19:16.462 "ffdhe6144", 00:19:16.462 "ffdhe8192" 00:19:16.462 ] 00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "nvmf_set_max_subsystems", 00:19:16.462 "params": { 00:19:16.462 "max_subsystems": 1024 
00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "nvmf_set_crdt", 00:19:16.462 "params": { 00:19:16.462 "crdt1": 0, 00:19:16.462 "crdt2": 0, 00:19:16.462 "crdt3": 0 00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "nvmf_create_transport", 00:19:16.462 "params": { 00:19:16.462 "trtype": "TCP", 00:19:16.462 "max_queue_depth": 128, 00:19:16.462 "max_io_qpairs_per_ctrlr": 127, 00:19:16.462 "in_capsule_data_size": 4096, 00:19:16.462 "max_io_size": 131072, 00:19:16.462 "io_unit_size": 131072, 00:19:16.462 "max_aq_depth": 128, 00:19:16.462 "num_shared_buffers": 511, 00:19:16.462 "buf_cache_size": 4294967295, 00:19:16.462 "dif_insert_or_strip": false, 00:19:16.462 "zcopy": false, 00:19:16.462 "c2h_success": false, 00:19:16.462 "sock_priority": 0, 00:19:16.462 "abort_timeout_sec": 1, 00:19:16.462 "ack_timeout": 0, 00:19:16.462 "data_wr_pool_size": 0 00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "nvmf_create_subsystem", 00:19:16.462 "params": { 00:19:16.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.462 "allow_any_host": false, 00:19:16.462 "serial_number": "SPDK00000000000001", 00:19:16.462 "model_number": "SPDK bdev Controller", 00:19:16.462 "max_namespaces": 10, 00:19:16.462 "min_cntlid": 1, 00:19:16.462 "max_cntlid": 65519, 00:19:16.462 "ana_reporting": false 00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "nvmf_subsystem_add_host", 00:19:16.462 "params": { 00:19:16.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.462 "host": "nqn.2016-06.io.spdk:host1", 00:19:16.462 "psk": "key0" 00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "nvmf_subsystem_add_ns", 00:19:16.462 "params": { 00:19:16.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.462 "namespace": { 00:19:16.462 "nsid": 1, 00:19:16.462 "bdev_name": "malloc0", 00:19:16.462 "nguid": "85DD25344BA84A7A91997F79D9467FAC", 00:19:16.462 "uuid": "85dd2534-4ba8-4a7a-9199-7f79d9467fac", 00:19:16.462 "no_auto_visible": false 00:19:16.462 } 00:19:16.462 } 00:19:16.462 }, 00:19:16.462 { 00:19:16.462 "method": "nvmf_subsystem_add_listener", 00:19:16.462 "params": { 00:19:16.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.462 "listen_address": { 00:19:16.462 "trtype": "TCP", 00:19:16.462 "adrfam": "IPv4", 00:19:16.462 "traddr": "10.0.0.2", 00:19:16.462 "trsvcid": "4420" 00:19:16.462 }, 00:19:16.462 "secure_channel": true 00:19:16.462 } 00:19:16.462 } 00:19:16.462 ] 00:19:16.462 } 00:19:16.462 ] 00:19:16.462 }' 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1134590 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1134590 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1134590 ']' 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:16.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:16.462 09:21:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.462 [2024-11-19 09:21:17.471910] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:19:16.462 [2024-11-19 09:21:17.471965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.721 [2024-11-19 09:21:17.548964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.721 [2024-11-19 09:21:17.589768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.721 [2024-11-19 09:21:17.589804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.721 [2024-11-19 09:21:17.589811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.721 [2024-11-19 09:21:17.589817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.721 [2024-11-19 09:21:17.589822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.721 [2024-11-19 09:21:17.590411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.980 [2024-11-19 09:21:17.803470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:16.980 [2024-11-19 09:21:17.835492] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:16.980 [2024-11-19 09:21:17.835682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1134835 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1134835 /var/tmp/bdevperf.sock 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1134835 ']' 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:17.549 09:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.549 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:17.549 "subsystems": [ 00:19:17.549 { 00:19:17.549 "subsystem": "keyring", 00:19:17.549 "config": [ 00:19:17.549 { 00:19:17.549 "method": "keyring_file_add_key", 00:19:17.549 "params": { 00:19:17.549 "name": "key0", 00:19:17.549 "path": "/tmp/tmp.mCU9m0uVNT" 00:19:17.549 } 00:19:17.549 } 00:19:17.549 ] 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "subsystem": "iobuf", 00:19:17.549 "config": [ 00:19:17.549 { 00:19:17.549 "method": "iobuf_set_options", 00:19:17.549 "params": { 00:19:17.549 "small_pool_count": 8192, 00:19:17.549 "large_pool_count": 1024, 00:19:17.549 "small_bufsize": 8192, 00:19:17.549 "large_bufsize": 135168, 00:19:17.549 "enable_numa": false 00:19:17.549 } 00:19:17.549 } 00:19:17.549 ] 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "subsystem": "sock", 00:19:17.549 "config": [ 00:19:17.549 { 00:19:17.549 "method": "sock_set_default_impl", 00:19:17.549 "params": { 00:19:17.549 "impl_name": "posix" 00:19:17.549 } 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "method": "sock_impl_set_options", 00:19:17.549 "params": { 00:19:17.549 "impl_name": "ssl", 00:19:17.549 "recv_buf_size": 4096, 00:19:17.549 "send_buf_size": 4096, 00:19:17.549 "enable_recv_pipe": true, 00:19:17.549 "enable_quickack": false, 00:19:17.549 "enable_placement_id": 0, 00:19:17.549 "enable_zerocopy_send_server": true, 00:19:17.549 "enable_zerocopy_send_client": false, 00:19:17.549 "zerocopy_threshold": 0, 00:19:17.549 "tls_version": 0, 00:19:17.549 "enable_ktls": false 00:19:17.549 } 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "method": "sock_impl_set_options", 00:19:17.549 "params": { 00:19:17.549 "impl_name": "posix", 00:19:17.549 "recv_buf_size": 2097152, 00:19:17.549 "send_buf_size": 2097152, 00:19:17.549 "enable_recv_pipe": true, 00:19:17.549 "enable_quickack": false, 00:19:17.549 "enable_placement_id": 0, 00:19:17.549 "enable_zerocopy_send_server": true, 00:19:17.549 "enable_zerocopy_send_client": false, 00:19:17.549 "zerocopy_threshold": 0, 00:19:17.549 "tls_version": 0, 00:19:17.549 "enable_ktls": false 00:19:17.549 } 00:19:17.549 } 00:19:17.549 ] 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "subsystem": "vmd", 00:19:17.549 "config": [] 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "subsystem": "accel", 00:19:17.549 "config": [ 00:19:17.549 { 00:19:17.549 "method": "accel_set_options", 00:19:17.549 "params": { 00:19:17.549 "small_cache_size": 128, 00:19:17.549 "large_cache_size": 16, 00:19:17.549 "task_count": 2048, 00:19:17.549 "sequence_count": 2048, 00:19:17.549 "buf_count": 2048 00:19:17.549 } 00:19:17.549 } 00:19:17.549 ] 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "subsystem": "bdev", 00:19:17.549 "config": [ 00:19:17.549 { 00:19:17.549 "method": "bdev_set_options", 00:19:17.549 "params": { 00:19:17.549 "bdev_io_pool_size": 65535, 00:19:17.549 "bdev_io_cache_size": 256, 00:19:17.549 "bdev_auto_examine": true, 00:19:17.549 "iobuf_small_cache_size": 128, 00:19:17.549 "iobuf_large_cache_size": 16 00:19:17.549 } 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "method": "bdev_raid_set_options", 00:19:17.549 "params": { 00:19:17.549 "process_window_size_kb": 1024, 00:19:17.549 "process_max_bandwidth_mb_sec": 0 00:19:17.549 } 00:19:17.549 }, 
00:19:17.549 { 00:19:17.549 "method": "bdev_iscsi_set_options", 00:19:17.549 "params": { 00:19:17.549 "timeout_sec": 30 00:19:17.549 } 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "method": "bdev_nvme_set_options", 00:19:17.549 "params": { 00:19:17.549 "action_on_timeout": "none", 00:19:17.549 "timeout_us": 0, 00:19:17.549 "timeout_admin_us": 0, 00:19:17.549 "keep_alive_timeout_ms": 10000, 00:19:17.549 "arbitration_burst": 0, 00:19:17.549 "low_priority_weight": 0, 00:19:17.549 "medium_priority_weight": 0, 00:19:17.549 "high_priority_weight": 0, 00:19:17.549 "nvme_adminq_poll_period_us": 10000, 00:19:17.549 "nvme_ioq_poll_period_us": 0, 00:19:17.549 "io_queue_requests": 512, 00:19:17.549 "delay_cmd_submit": true, 00:19:17.549 "transport_retry_count": 4, 00:19:17.549 "bdev_retry_count": 3, 00:19:17.549 "transport_ack_timeout": 0, 00:19:17.549 "ctrlr_loss_timeout_sec": 0, 00:19:17.549 "reconnect_delay_sec": 0, 00:19:17.549 "fast_io_fail_timeout_sec": 0, 00:19:17.549 "disable_auto_failback": false, 00:19:17.549 "generate_uuids": false, 00:19:17.549 "transport_tos": 0, 00:19:17.549 "nvme_error_stat": false, 00:19:17.549 "rdma_srq_size": 0, 00:19:17.549 "io_path_stat": false, 00:19:17.549 "allow_accel_sequence": false, 00:19:17.549 "rdma_max_cq_size": 0, 00:19:17.549 "rdma_cm_event_timeout_ms": 0, 00:19:17.549 "dhchap_digests": [ 00:19:17.549 "sha256", 00:19:17.549 "sha384", 00:19:17.549 "sha512" 00:19:17.549 ], 00:19:17.549 "dhchap_dhgroups": [ 00:19:17.549 "null", 00:19:17.549 "ffdhe2048", 00:19:17.549 "ffdhe3072", 00:19:17.549 "ffdhe4096", 00:19:17.549 "ffdhe6144", 00:19:17.549 "ffdhe8192" 00:19:17.549 ] 00:19:17.549 } 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "method": "bdev_nvme_attach_controller", 00:19:17.549 "params": { 00:19:17.549 "name": "TLSTEST", 00:19:17.549 "trtype": "TCP", 00:19:17.549 "adrfam": "IPv4", 00:19:17.549 "traddr": "10.0.0.2", 00:19:17.549 "trsvcid": "4420", 00:19:17.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.549 "prchk_reftag": false, 00:19:17.549 "prchk_guard": false, 00:19:17.549 "ctrlr_loss_timeout_sec": 0, 00:19:17.549 "reconnect_delay_sec": 0, 00:19:17.549 "fast_io_fail_timeout_sec": 0, 00:19:17.549 "psk": "key0", 00:19:17.549 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.549 "hdgst": false, 00:19:17.549 "ddgst": false, 00:19:17.549 "multipath": "multipath" 00:19:17.549 } 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "method": "bdev_nvme_set_hotplug", 00:19:17.549 "params": { 00:19:17.549 "period_us": 100000, 00:19:17.549 "enable": false 00:19:17.549 } 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "method": "bdev_wait_for_examine" 00:19:17.549 } 00:19:17.549 ] 00:19:17.549 }, 00:19:17.549 { 00:19:17.549 "subsystem": "nbd", 00:19:17.549 "config": [] 00:19:17.549 } 00:19:17.549 ] 00:19:17.549 }' 00:19:17.550 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:17.550 09:21:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.550 [2024-11-19 09:21:18.391718] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
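The bdevperf configuration echoed above is the client half of the TLS setup: the keyring entry loads the PSK file, and bdev_nvme_attach_controller references it as "psk": "key0" when opening the NVMe/TCP connection to the secured listener. The same state can be reached imperatively once bdevperf is up on its -r socket; a minimal sketch assembled from RPCs that appear verbatim elsewhere in this trace (the controller name TLSTEST is taken from the JSON above, and the workspace prefix is shortened to scripts/rpc.py):

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Feeding the whole JSON through -c /dev/fd/63, as target/tls.sh does here, is the declarative equivalent of those two calls.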
00:19:17.550 [2024-11-19 09:21:18.391764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134835 ] 00:19:17.550 [2024-11-19 09:21:18.468411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.550 [2024-11-19 09:21:18.511142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.808 [2024-11-19 09:21:18.664975] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.374 09:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:18.374 09:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:18.374 09:21:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:18.374 Running I/O for 10 seconds... 00:19:20.683 5365.00 IOPS, 20.96 MiB/s [2024-11-19T08:21:22.678Z] 5459.00 IOPS, 21.32 MiB/s [2024-11-19T08:21:23.614Z] 5460.00 IOPS, 21.33 MiB/s [2024-11-19T08:21:24.594Z] 5468.75 IOPS, 21.36 MiB/s [2024-11-19T08:21:25.536Z] 5482.40 IOPS, 21.42 MiB/s [2024-11-19T08:21:26.471Z] 5488.67 IOPS, 21.44 MiB/s [2024-11-19T08:21:27.405Z] 5490.29 IOPS, 21.45 MiB/s [2024-11-19T08:21:28.783Z] 5497.75 IOPS, 21.48 MiB/s [2024-11-19T08:21:29.721Z] 5459.11 IOPS, 21.32 MiB/s [2024-11-19T08:21:29.721Z] 5466.10 IOPS, 21.35 MiB/s 00:19:28.662 Latency(us) 00:19:28.662 [2024-11-19T08:21:29.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.662 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:28.662 Verification LBA range: start 0x0 length 0x2000 00:19:28.662 TLSTESTn1 : 10.02 5470.45 21.37 0.00 0.00 23363.27 6126.19 22567.18 00:19:28.662 [2024-11-19T08:21:29.721Z] =================================================================================================================== 00:19:28.662 [2024-11-19T08:21:29.721Z] Total : 5470.45 21.37 0.00 0.00 23363.27 6126.19 22567.18 00:19:28.662 { 00:19:28.662 "results": [ 00:19:28.662 { 00:19:28.663 "job": "TLSTESTn1", 00:19:28.663 "core_mask": "0x4", 00:19:28.663 "workload": "verify", 00:19:28.663 "status": "finished", 00:19:28.663 "verify_range": { 00:19:28.663 "start": 0, 00:19:28.663 "length": 8192 00:19:28.663 }, 00:19:28.663 "queue_depth": 128, 00:19:28.663 "io_size": 4096, 00:19:28.663 "runtime": 10.015083, 00:19:28.663 "iops": 5470.448921891111, 00:19:28.663 "mibps": 21.368941101137153, 00:19:28.663 "io_failed": 0, 00:19:28.663 "io_timeout": 0, 00:19:28.663 "avg_latency_us": 23363.270050511823, 00:19:28.663 "min_latency_us": 6126.191304347826, 00:19:28.663 "max_latency_us": 22567.179130434783 00:19:28.663 } 00:19:28.663 ], 00:19:28.663 "core_count": 1 00:19:28.663 } 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1134835 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1134835 ']' 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1134835 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1134835 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1134835' 00:19:28.663 killing process with pid 1134835 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1134835 00:19:28.663 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.663 00:19:28.663 Latency(us) 00:19:28.663 [2024-11-19T08:21:29.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.663 [2024-11-19T08:21:29.722Z] =================================================================================================================== 00:19:28.663 [2024-11-19T08:21:29.722Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1134835 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1134590 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1134590 ']' 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1134590 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1134590 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1134590' 00:19:28.663 killing process with pid 1134590 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1134590 00:19:28.663 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1134590 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1136677 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1136677 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
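This fresh target starts without any -c config; everything is provisioned over the RPC socket instead. Condensed from the setup_nvmf_tgt trace that follows (workspace prefix again shortened to scripts/rpc.py), the sequence is:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k        # -k marks the listener as secure-channel (TLS)
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Only a host presenting the PSK registered as key0 can then connect through 10.0.0.2:4420.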
00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1136677 ']' 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:28.923 09:21:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.923 [2024-11-19 09:21:29.895594] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:19:28.923 [2024-11-19 09:21:29.895643] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.923 [2024-11-19 09:21:29.975712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.184 [2024-11-19 09:21:30.022060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.184 [2024-11-19 09:21:30.022095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.184 [2024-11-19 09:21:30.022103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.184 [2024-11-19 09:21:30.022110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.184 [2024-11-19 09:21:30.022115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:29.184 [2024-11-19 09:21:30.022524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.184 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:29.184 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:29.184 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:29.184 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.184 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.184 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.184 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.mCU9m0uVNT 00:19:29.184 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.mCU9m0uVNT 00:19:29.184 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.442 [2024-11-19 09:21:30.329995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.442 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:29.700 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.700 [2024-11-19 09:21:30.731026] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.700 [2024-11-19 09:21:30.731234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.959 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.959 malloc0 00:19:29.959 09:21:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.217 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT 00:19:30.477 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1136938 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1136938 /var/tmp/bdevperf.sock 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1136938 ']' 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.737 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.737 [2024-11-19 09:21:31.584562] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:19:30.737 [2024-11-19 09:21:31.584610] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136938 ] 00:19:30.737 [2024-11-19 09:21:31.658815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.737 [2024-11-19 09:21:31.699889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.996 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:30.996 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:30.996 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT 00:19:30.996 09:21:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:31.256 [2024-11-19 09:21:32.167691] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.256 nvme0n1 00:19:31.256 09:21:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:31.515 Running I/O for 1 seconds... 
00:19:32.454 5113.00 IOPS, 19.97 MiB/s 00:19:32.454 Latency(us) 00:19:32.454 [2024-11-19T08:21:33.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.454 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:32.454 Verification LBA range: start 0x0 length 0x2000 00:19:32.454 nvme0n1 : 1.01 5168.82 20.19 0.00 0.00 24596.19 5784.26 27126.21 00:19:32.454 [2024-11-19T08:21:33.513Z] =================================================================================================================== 00:19:32.454 [2024-11-19T08:21:33.513Z] Total : 5168.82 20.19 0.00 0.00 24596.19 5784.26 27126.21 00:19:32.454 { 00:19:32.454 "results": [ 00:19:32.454 { 00:19:32.454 "job": "nvme0n1", 00:19:32.454 "core_mask": "0x2", 00:19:32.454 "workload": "verify", 00:19:32.454 "status": "finished", 00:19:32.454 "verify_range": { 00:19:32.454 "start": 0, 00:19:32.454 "length": 8192 00:19:32.454 }, 00:19:32.454 "queue_depth": 128, 00:19:32.454 "io_size": 4096, 00:19:32.454 "runtime": 1.013964, 00:19:32.454 "iops": 5168.822561747755, 00:19:32.454 "mibps": 20.190713131827167, 00:19:32.454 "io_failed": 0, 00:19:32.454 "io_timeout": 0, 00:19:32.454 "avg_latency_us": 24596.194741461553, 00:19:32.454 "min_latency_us": 5784.264347826087, 00:19:32.454 "max_latency_us": 27126.205217391303 00:19:32.454 } 00:19:32.454 ], 00:19:32.454 "core_count": 1 00:19:32.454 } 00:19:32.454 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1136938 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1136938 ']' 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1136938 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1136938 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1136938' 00:19:32.455 killing process with pid 1136938 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1136938 00:19:32.455 Received shutdown signal, test time was about 1.000000 seconds 00:19:32.455 00:19:32.455 Latency(us) 00:19:32.455 [2024-11-19T08:21:33.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.455 [2024-11-19T08:21:33.514Z] =================================================================================================================== 00:19:32.455 [2024-11-19T08:21:33.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.455 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1136938 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1136677 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1136677 ']' 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1136677 00:19:32.715 09:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1136677 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1136677' 00:19:32.715 killing process with pid 1136677 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1136677 00:19:32.715 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1136677 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1137334 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1137334 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1137334 ']' 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.974 09:21:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.974 [2024-11-19 09:21:33.892735] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:19:32.974 [2024-11-19 09:21:33.892783] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.974 [2024-11-19 09:21:33.972056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.974 [2024-11-19 09:21:34.013104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.974 [2024-11-19 09:21:34.013141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:32.974 [2024-11-19 09:21:34.013149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.974 [2024-11-19 09:21:34.013155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.974 [2024-11-19 09:21:34.013160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.974 [2024-11-19 09:21:34.013710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.234 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:33.234 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:33.234 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:33.234 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.234 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.235 [2024-11-19 09:21:34.153365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.235 malloc0 00:19:33.235 [2024-11-19 09:21:34.181593] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.235 [2024-11-19 09:21:34.181782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1137422 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1137422 /var/tmp/bdevperf.sock 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1137422 ']' 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:33.235 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.235 [2024-11-19 09:21:34.254156] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
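The point of this final pass is the round trip: after the verify run below, the target just built over RPC is snapshotted with save_config into tgtcfg (the large JSON dump further down), which has the same shape as the config the earlier @205 run fed back into a fresh target via -c. A sketch of the idiom, with tgt.json as a stand-in for the /dev/fd pipe the script actually uses:

  scripts/rpc.py save_config > tgt.json        # capture the live TLS target configuration
  build/bin/nvmf_tgt -m 0x2 -c tgt.json        # replay it on a new target process

Judging by the dumps in this trace, runtime state including keyring entries and secure_channel listeners survives the dump/reload unchanged.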
00:19:33.235 [2024-11-19 09:21:34.254197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137422 ] 00:19:33.495 [2024-11-19 09:21:34.328003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.495 [2024-11-19 09:21:34.368799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.495 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:33.495 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:33.495 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mCU9m0uVNT 00:19:33.753 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:34.014 [2024-11-19 09:21:34.849042] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.014 nvme0n1 00:19:34.014 09:21:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.014 Running I/O for 1 seconds... 00:19:35.392 5209.00 IOPS, 20.35 MiB/s 00:19:35.392 Latency(us) 00:19:35.392 [2024-11-19T08:21:36.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.392 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:35.392 Verification LBA range: start 0x0 length 0x2000 00:19:35.392 nvme0n1 : 1.02 5246.65 20.49 0.00 0.00 24218.50 7408.42 27126.21 00:19:35.392 [2024-11-19T08:21:36.451Z] =================================================================================================================== 00:19:35.392 [2024-11-19T08:21:36.451Z] Total : 5246.65 20.49 0.00 0.00 24218.50 7408.42 27126.21 00:19:35.392 { 00:19:35.392 "results": [ 00:19:35.392 { 00:19:35.392 "job": "nvme0n1", 00:19:35.392 "core_mask": "0x2", 00:19:35.392 "workload": "verify", 00:19:35.392 "status": "finished", 00:19:35.392 "verify_range": { 00:19:35.392 "start": 0, 00:19:35.392 "length": 8192 00:19:35.392 }, 00:19:35.392 "queue_depth": 128, 00:19:35.392 "io_size": 4096, 00:19:35.392 "runtime": 1.017221, 00:19:35.392 "iops": 5246.647483683487, 00:19:35.392 "mibps": 20.49471673313862, 00:19:35.392 "io_failed": 0, 00:19:35.392 "io_timeout": 0, 00:19:35.392 "avg_latency_us": 24218.496691350785, 00:19:35.392 "min_latency_us": 7408.417391304348, 00:19:35.392 "max_latency_us": 27126.205217391303 00:19:35.392 } 00:19:35.392 ], 00:19:35.392 "core_count": 1 00:19:35.392 } 00:19:35.392 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:35.392 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.392 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.392 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.392 09:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:35.392 "subsystems": [ 00:19:35.392 { 00:19:35.392 "subsystem": "keyring", 00:19:35.392 "config": [ 00:19:35.392 { 00:19:35.392 "method": "keyring_file_add_key", 00:19:35.392 "params": { 00:19:35.392 "name": "key0", 00:19:35.392 "path": "/tmp/tmp.mCU9m0uVNT" 00:19:35.392 } 00:19:35.392 } 00:19:35.392 ] 00:19:35.392 }, 00:19:35.392 { 00:19:35.392 "subsystem": "iobuf", 00:19:35.392 "config": [ 00:19:35.392 { 00:19:35.392 "method": "iobuf_set_options", 00:19:35.392 "params": { 00:19:35.392 "small_pool_count": 8192, 00:19:35.392 "large_pool_count": 1024, 00:19:35.392 "small_bufsize": 8192, 00:19:35.392 "large_bufsize": 135168, 00:19:35.392 "enable_numa": false 00:19:35.392 } 00:19:35.392 } 00:19:35.392 ] 00:19:35.392 }, 00:19:35.392 { 00:19:35.392 "subsystem": "sock", 00:19:35.392 "config": [ 00:19:35.392 { 00:19:35.392 "method": "sock_set_default_impl", 00:19:35.392 "params": { 00:19:35.392 "impl_name": "posix" 00:19:35.392 } 00:19:35.392 }, 00:19:35.392 { 00:19:35.392 "method": "sock_impl_set_options", 00:19:35.392 "params": { 00:19:35.392 "impl_name": "ssl", 00:19:35.392 "recv_buf_size": 4096, 00:19:35.392 "send_buf_size": 4096, 00:19:35.392 "enable_recv_pipe": true, 00:19:35.392 "enable_quickack": false, 00:19:35.392 "enable_placement_id": 0, 00:19:35.392 "enable_zerocopy_send_server": true, 00:19:35.392 "enable_zerocopy_send_client": false, 00:19:35.392 "zerocopy_threshold": 0, 00:19:35.392 "tls_version": 0, 00:19:35.392 "enable_ktls": false 00:19:35.392 } 00:19:35.392 }, 00:19:35.392 { 00:19:35.392 "method": "sock_impl_set_options", 00:19:35.392 "params": { 00:19:35.392 "impl_name": "posix", 00:19:35.392 "recv_buf_size": 2097152, 00:19:35.392 "send_buf_size": 2097152, 00:19:35.392 "enable_recv_pipe": true, 00:19:35.392 "enable_quickack": false, 00:19:35.392 "enable_placement_id": 0, 00:19:35.392 "enable_zerocopy_send_server": true, 00:19:35.392 "enable_zerocopy_send_client": false, 00:19:35.392 "zerocopy_threshold": 0, 00:19:35.392 "tls_version": 0, 00:19:35.392 "enable_ktls": false 00:19:35.392 } 00:19:35.392 } 00:19:35.392 ] 00:19:35.392 }, 00:19:35.392 { 00:19:35.392 "subsystem": "vmd", 00:19:35.392 "config": [] 00:19:35.392 }, 00:19:35.392 { 00:19:35.392 "subsystem": "accel", 00:19:35.392 "config": [ 00:19:35.392 { 00:19:35.392 "method": "accel_set_options", 00:19:35.392 "params": { 00:19:35.392 "small_cache_size": 128, 00:19:35.392 "large_cache_size": 16, 00:19:35.392 "task_count": 2048, 00:19:35.392 "sequence_count": 2048, 00:19:35.392 "buf_count": 2048 00:19:35.392 } 00:19:35.392 } 00:19:35.392 ] 00:19:35.392 }, 00:19:35.392 { 00:19:35.392 "subsystem": "bdev", 00:19:35.393 "config": [ 00:19:35.393 { 00:19:35.393 "method": "bdev_set_options", 00:19:35.393 "params": { 00:19:35.393 "bdev_io_pool_size": 65535, 00:19:35.393 "bdev_io_cache_size": 256, 00:19:35.393 "bdev_auto_examine": true, 00:19:35.393 "iobuf_small_cache_size": 128, 00:19:35.393 "iobuf_large_cache_size": 16 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "bdev_raid_set_options", 00:19:35.393 "params": { 00:19:35.393 "process_window_size_kb": 1024, 00:19:35.393 "process_max_bandwidth_mb_sec": 0 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "bdev_iscsi_set_options", 00:19:35.393 "params": { 00:19:35.393 "timeout_sec": 30 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "bdev_nvme_set_options", 00:19:35.393 "params": { 00:19:35.393 "action_on_timeout": "none", 00:19:35.393 
"timeout_us": 0, 00:19:35.393 "timeout_admin_us": 0, 00:19:35.393 "keep_alive_timeout_ms": 10000, 00:19:35.393 "arbitration_burst": 0, 00:19:35.393 "low_priority_weight": 0, 00:19:35.393 "medium_priority_weight": 0, 00:19:35.393 "high_priority_weight": 0, 00:19:35.393 "nvme_adminq_poll_period_us": 10000, 00:19:35.393 "nvme_ioq_poll_period_us": 0, 00:19:35.393 "io_queue_requests": 0, 00:19:35.393 "delay_cmd_submit": true, 00:19:35.393 "transport_retry_count": 4, 00:19:35.393 "bdev_retry_count": 3, 00:19:35.393 "transport_ack_timeout": 0, 00:19:35.393 "ctrlr_loss_timeout_sec": 0, 00:19:35.393 "reconnect_delay_sec": 0, 00:19:35.393 "fast_io_fail_timeout_sec": 0, 00:19:35.393 "disable_auto_failback": false, 00:19:35.393 "generate_uuids": false, 00:19:35.393 "transport_tos": 0, 00:19:35.393 "nvme_error_stat": false, 00:19:35.393 "rdma_srq_size": 0, 00:19:35.393 "io_path_stat": false, 00:19:35.393 "allow_accel_sequence": false, 00:19:35.393 "rdma_max_cq_size": 0, 00:19:35.393 "rdma_cm_event_timeout_ms": 0, 00:19:35.393 "dhchap_digests": [ 00:19:35.393 "sha256", 00:19:35.393 "sha384", 00:19:35.393 "sha512" 00:19:35.393 ], 00:19:35.393 "dhchap_dhgroups": [ 00:19:35.393 "null", 00:19:35.393 "ffdhe2048", 00:19:35.393 "ffdhe3072", 00:19:35.393 "ffdhe4096", 00:19:35.393 "ffdhe6144", 00:19:35.393 "ffdhe8192" 00:19:35.393 ] 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "bdev_nvme_set_hotplug", 00:19:35.393 "params": { 00:19:35.393 "period_us": 100000, 00:19:35.393 "enable": false 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "bdev_malloc_create", 00:19:35.393 "params": { 00:19:35.393 "name": "malloc0", 00:19:35.393 "num_blocks": 8192, 00:19:35.393 "block_size": 4096, 00:19:35.393 "physical_block_size": 4096, 00:19:35.393 "uuid": "e81c568e-444e-403a-b9cb-b0f3c3d1344a", 00:19:35.393 "optimal_io_boundary": 0, 00:19:35.393 "md_size": 0, 00:19:35.393 "dif_type": 0, 00:19:35.393 "dif_is_head_of_md": false, 00:19:35.393 "dif_pi_format": 0 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "bdev_wait_for_examine" 00:19:35.393 } 00:19:35.393 ] 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "subsystem": "nbd", 00:19:35.393 "config": [] 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "subsystem": "scheduler", 00:19:35.393 "config": [ 00:19:35.393 { 00:19:35.393 "method": "framework_set_scheduler", 00:19:35.393 "params": { 00:19:35.393 "name": "static" 00:19:35.393 } 00:19:35.393 } 00:19:35.393 ] 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "subsystem": "nvmf", 00:19:35.393 "config": [ 00:19:35.393 { 00:19:35.393 "method": "nvmf_set_config", 00:19:35.393 "params": { 00:19:35.393 "discovery_filter": "match_any", 00:19:35.393 "admin_cmd_passthru": { 00:19:35.393 "identify_ctrlr": false 00:19:35.393 }, 00:19:35.393 "dhchap_digests": [ 00:19:35.393 "sha256", 00:19:35.393 "sha384", 00:19:35.393 "sha512" 00:19:35.393 ], 00:19:35.393 "dhchap_dhgroups": [ 00:19:35.393 "null", 00:19:35.393 "ffdhe2048", 00:19:35.393 "ffdhe3072", 00:19:35.393 "ffdhe4096", 00:19:35.393 "ffdhe6144", 00:19:35.393 "ffdhe8192" 00:19:35.393 ] 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "nvmf_set_max_subsystems", 00:19:35.393 "params": { 00:19:35.393 "max_subsystems": 1024 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "nvmf_set_crdt", 00:19:35.393 "params": { 00:19:35.393 "crdt1": 0, 00:19:35.393 "crdt2": 0, 00:19:35.393 "crdt3": 0 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "nvmf_create_transport", 00:19:35.393 "params": 
{ 00:19:35.393 "trtype": "TCP", 00:19:35.393 "max_queue_depth": 128, 00:19:35.393 "max_io_qpairs_per_ctrlr": 127, 00:19:35.393 "in_capsule_data_size": 4096, 00:19:35.393 "max_io_size": 131072, 00:19:35.393 "io_unit_size": 131072, 00:19:35.393 "max_aq_depth": 128, 00:19:35.393 "num_shared_buffers": 511, 00:19:35.393 "buf_cache_size": 4294967295, 00:19:35.393 "dif_insert_or_strip": false, 00:19:35.393 "zcopy": false, 00:19:35.393 "c2h_success": false, 00:19:35.393 "sock_priority": 0, 00:19:35.393 "abort_timeout_sec": 1, 00:19:35.393 "ack_timeout": 0, 00:19:35.393 "data_wr_pool_size": 0 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "nvmf_create_subsystem", 00:19:35.393 "params": { 00:19:35.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.393 "allow_any_host": false, 00:19:35.393 "serial_number": "00000000000000000000", 00:19:35.393 "model_number": "SPDK bdev Controller", 00:19:35.393 "max_namespaces": 32, 00:19:35.393 "min_cntlid": 1, 00:19:35.393 "max_cntlid": 65519, 00:19:35.393 "ana_reporting": false 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "nvmf_subsystem_add_host", 00:19:35.393 "params": { 00:19:35.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.393 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.393 "psk": "key0" 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "nvmf_subsystem_add_ns", 00:19:35.393 "params": { 00:19:35.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.393 "namespace": { 00:19:35.393 "nsid": 1, 00:19:35.393 "bdev_name": "malloc0", 00:19:35.393 "nguid": "E81C568E444E403AB9CBB0F3C3D1344A", 00:19:35.393 "uuid": "e81c568e-444e-403a-b9cb-b0f3c3d1344a", 00:19:35.393 "no_auto_visible": false 00:19:35.393 } 00:19:35.393 } 00:19:35.393 }, 00:19:35.393 { 00:19:35.393 "method": "nvmf_subsystem_add_listener", 00:19:35.393 "params": { 00:19:35.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.393 "listen_address": { 00:19:35.393 "trtype": "TCP", 00:19:35.393 "adrfam": "IPv4", 00:19:35.393 "traddr": "10.0.0.2", 00:19:35.393 "trsvcid": "4420" 00:19:35.393 }, 00:19:35.393 "secure_channel": false, 00:19:35.393 "sock_impl": "ssl" 00:19:35.393 } 00:19:35.393 } 00:19:35.393 ] 00:19:35.393 } 00:19:35.393 ] 00:19:35.393 }' 00:19:35.393 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:35.652 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:35.652 "subsystems": [ 00:19:35.652 { 00:19:35.652 "subsystem": "keyring", 00:19:35.652 "config": [ 00:19:35.652 { 00:19:35.652 "method": "keyring_file_add_key", 00:19:35.652 "params": { 00:19:35.652 "name": "key0", 00:19:35.652 "path": "/tmp/tmp.mCU9m0uVNT" 00:19:35.652 } 00:19:35.652 } 00:19:35.652 ] 00:19:35.652 }, 00:19:35.652 { 00:19:35.652 "subsystem": "iobuf", 00:19:35.652 "config": [ 00:19:35.652 { 00:19:35.652 "method": "iobuf_set_options", 00:19:35.652 "params": { 00:19:35.652 "small_pool_count": 8192, 00:19:35.652 "large_pool_count": 1024, 00:19:35.652 "small_bufsize": 8192, 00:19:35.652 "large_bufsize": 135168, 00:19:35.652 "enable_numa": false 00:19:35.652 } 00:19:35.652 } 00:19:35.652 ] 00:19:35.652 }, 00:19:35.652 { 00:19:35.652 "subsystem": "sock", 00:19:35.652 "config": [ 00:19:35.652 { 00:19:35.652 "method": "sock_set_default_impl", 00:19:35.652 "params": { 00:19:35.652 "impl_name": "posix" 00:19:35.652 } 00:19:35.652 }, 00:19:35.652 { 00:19:35.652 "method": "sock_impl_set_options", 00:19:35.652 
"params": { 00:19:35.652 "impl_name": "ssl", 00:19:35.652 "recv_buf_size": 4096, 00:19:35.652 "send_buf_size": 4096, 00:19:35.652 "enable_recv_pipe": true, 00:19:35.652 "enable_quickack": false, 00:19:35.652 "enable_placement_id": 0, 00:19:35.652 "enable_zerocopy_send_server": true, 00:19:35.652 "enable_zerocopy_send_client": false, 00:19:35.652 "zerocopy_threshold": 0, 00:19:35.652 "tls_version": 0, 00:19:35.652 "enable_ktls": false 00:19:35.652 } 00:19:35.652 }, 00:19:35.652 { 00:19:35.652 "method": "sock_impl_set_options", 00:19:35.652 "params": { 00:19:35.653 "impl_name": "posix", 00:19:35.653 "recv_buf_size": 2097152, 00:19:35.653 "send_buf_size": 2097152, 00:19:35.653 "enable_recv_pipe": true, 00:19:35.653 "enable_quickack": false, 00:19:35.653 "enable_placement_id": 0, 00:19:35.653 "enable_zerocopy_send_server": true, 00:19:35.653 "enable_zerocopy_send_client": false, 00:19:35.653 "zerocopy_threshold": 0, 00:19:35.653 "tls_version": 0, 00:19:35.653 "enable_ktls": false 00:19:35.653 } 00:19:35.653 } 00:19:35.653 ] 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "subsystem": "vmd", 00:19:35.653 "config": [] 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "subsystem": "accel", 00:19:35.653 "config": [ 00:19:35.653 { 00:19:35.653 "method": "accel_set_options", 00:19:35.653 "params": { 00:19:35.653 "small_cache_size": 128, 00:19:35.653 "large_cache_size": 16, 00:19:35.653 "task_count": 2048, 00:19:35.653 "sequence_count": 2048, 00:19:35.653 "buf_count": 2048 00:19:35.653 } 00:19:35.653 } 00:19:35.653 ] 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "subsystem": "bdev", 00:19:35.653 "config": [ 00:19:35.653 { 00:19:35.653 "method": "bdev_set_options", 00:19:35.653 "params": { 00:19:35.653 "bdev_io_pool_size": 65535, 00:19:35.653 "bdev_io_cache_size": 256, 00:19:35.653 "bdev_auto_examine": true, 00:19:35.653 "iobuf_small_cache_size": 128, 00:19:35.653 "iobuf_large_cache_size": 16 00:19:35.653 } 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "method": "bdev_raid_set_options", 00:19:35.653 "params": { 00:19:35.653 "process_window_size_kb": 1024, 00:19:35.653 "process_max_bandwidth_mb_sec": 0 00:19:35.653 } 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "method": "bdev_iscsi_set_options", 00:19:35.653 "params": { 00:19:35.653 "timeout_sec": 30 00:19:35.653 } 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "method": "bdev_nvme_set_options", 00:19:35.653 "params": { 00:19:35.653 "action_on_timeout": "none", 00:19:35.653 "timeout_us": 0, 00:19:35.653 "timeout_admin_us": 0, 00:19:35.653 "keep_alive_timeout_ms": 10000, 00:19:35.653 "arbitration_burst": 0, 00:19:35.653 "low_priority_weight": 0, 00:19:35.653 "medium_priority_weight": 0, 00:19:35.653 "high_priority_weight": 0, 00:19:35.653 "nvme_adminq_poll_period_us": 10000, 00:19:35.653 "nvme_ioq_poll_period_us": 0, 00:19:35.653 "io_queue_requests": 512, 00:19:35.653 "delay_cmd_submit": true, 00:19:35.653 "transport_retry_count": 4, 00:19:35.653 "bdev_retry_count": 3, 00:19:35.653 "transport_ack_timeout": 0, 00:19:35.653 "ctrlr_loss_timeout_sec": 0, 00:19:35.653 "reconnect_delay_sec": 0, 00:19:35.653 "fast_io_fail_timeout_sec": 0, 00:19:35.653 "disable_auto_failback": false, 00:19:35.653 "generate_uuids": false, 00:19:35.653 "transport_tos": 0, 00:19:35.653 "nvme_error_stat": false, 00:19:35.653 "rdma_srq_size": 0, 00:19:35.653 "io_path_stat": false, 00:19:35.653 "allow_accel_sequence": false, 00:19:35.653 "rdma_max_cq_size": 0, 00:19:35.653 "rdma_cm_event_timeout_ms": 0, 00:19:35.653 "dhchap_digests": [ 00:19:35.653 "sha256", 00:19:35.653 "sha384", 00:19:35.653 
"sha512" 00:19:35.653 ], 00:19:35.653 "dhchap_dhgroups": [ 00:19:35.653 "null", 00:19:35.653 "ffdhe2048", 00:19:35.653 "ffdhe3072", 00:19:35.653 "ffdhe4096", 00:19:35.653 "ffdhe6144", 00:19:35.653 "ffdhe8192" 00:19:35.653 ] 00:19:35.653 } 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "method": "bdev_nvme_attach_controller", 00:19:35.653 "params": { 00:19:35.653 "name": "nvme0", 00:19:35.653 "trtype": "TCP", 00:19:35.653 "adrfam": "IPv4", 00:19:35.653 "traddr": "10.0.0.2", 00:19:35.653 "trsvcid": "4420", 00:19:35.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.653 "prchk_reftag": false, 00:19:35.653 "prchk_guard": false, 00:19:35.653 "ctrlr_loss_timeout_sec": 0, 00:19:35.653 "reconnect_delay_sec": 0, 00:19:35.653 "fast_io_fail_timeout_sec": 0, 00:19:35.653 "psk": "key0", 00:19:35.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.653 "hdgst": false, 00:19:35.653 "ddgst": false, 00:19:35.653 "multipath": "multipath" 00:19:35.653 } 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "method": "bdev_nvme_set_hotplug", 00:19:35.653 "params": { 00:19:35.653 "period_us": 100000, 00:19:35.653 "enable": false 00:19:35.653 } 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "method": "bdev_enable_histogram", 00:19:35.653 "params": { 00:19:35.653 "name": "nvme0n1", 00:19:35.653 "enable": true 00:19:35.653 } 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "method": "bdev_wait_for_examine" 00:19:35.653 } 00:19:35.653 ] 00:19:35.653 }, 00:19:35.653 { 00:19:35.653 "subsystem": "nbd", 00:19:35.653 "config": [] 00:19:35.653 } 00:19:35.653 ] 00:19:35.653 }' 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1137422 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1137422 ']' 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1137422 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1137422 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1137422' 00:19:35.653 killing process with pid 1137422 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1137422 00:19:35.653 Received shutdown signal, test time was about 1.000000 seconds 00:19:35.653 00:19:35.653 Latency(us) 00:19:35.653 [2024-11-19T08:21:36.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.653 [2024-11-19T08:21:36.712Z] =================================================================================================================== 00:19:35.653 [2024-11-19T08:21:36.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1137422 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1137334 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1137334 
']' 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1137334 00:19:35.653 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:35.654 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:35.654 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1137334 00:19:35.913 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:35.913 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:35.913 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1137334' 00:19:35.913 killing process with pid 1137334 00:19:35.913 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1137334 00:19:35.913 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1137334 00:19:35.913 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:35.913 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.913 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.913 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:35.913 "subsystems": [ 00:19:35.913 { 00:19:35.913 "subsystem": "keyring", 00:19:35.913 "config": [ 00:19:35.913 { 00:19:35.913 "method": "keyring_file_add_key", 00:19:35.913 "params": { 00:19:35.913 "name": "key0", 00:19:35.913 "path": "/tmp/tmp.mCU9m0uVNT" 00:19:35.913 } 00:19:35.913 } 00:19:35.913 ] 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "subsystem": "iobuf", 00:19:35.913 "config": [ 00:19:35.913 { 00:19:35.913 "method": "iobuf_set_options", 00:19:35.913 "params": { 00:19:35.913 "small_pool_count": 8192, 00:19:35.913 "large_pool_count": 1024, 00:19:35.913 "small_bufsize": 8192, 00:19:35.913 "large_bufsize": 135168, 00:19:35.913 "enable_numa": false 00:19:35.913 } 00:19:35.913 } 00:19:35.913 ] 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "subsystem": "sock", 00:19:35.913 "config": [ 00:19:35.913 { 00:19:35.913 "method": "sock_set_default_impl", 00:19:35.913 "params": { 00:19:35.913 "impl_name": "posix" 00:19:35.913 } 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "method": "sock_impl_set_options", 00:19:35.913 "params": { 00:19:35.913 "impl_name": "ssl", 00:19:35.913 "recv_buf_size": 4096, 00:19:35.913 "send_buf_size": 4096, 00:19:35.913 "enable_recv_pipe": true, 00:19:35.913 "enable_quickack": false, 00:19:35.913 "enable_placement_id": 0, 00:19:35.913 "enable_zerocopy_send_server": true, 00:19:35.913 "enable_zerocopy_send_client": false, 00:19:35.913 "zerocopy_threshold": 0, 00:19:35.913 "tls_version": 0, 00:19:35.913 "enable_ktls": false 00:19:35.913 } 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "method": "sock_impl_set_options", 00:19:35.913 "params": { 00:19:35.913 "impl_name": "posix", 00:19:35.913 "recv_buf_size": 2097152, 00:19:35.913 "send_buf_size": 2097152, 00:19:35.913 "enable_recv_pipe": true, 00:19:35.913 "enable_quickack": false, 00:19:35.913 "enable_placement_id": 0, 00:19:35.913 "enable_zerocopy_send_server": true, 00:19:35.913 "enable_zerocopy_send_client": false, 00:19:35.913 "zerocopy_threshold": 0, 00:19:35.913 "tls_version": 0, 00:19:35.913 "enable_ktls": 
false 00:19:35.913 } 00:19:35.913 } 00:19:35.913 ] 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "subsystem": "vmd", 00:19:35.913 "config": [] 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "subsystem": "accel", 00:19:35.913 "config": [ 00:19:35.913 { 00:19:35.913 "method": "accel_set_options", 00:19:35.913 "params": { 00:19:35.913 "small_cache_size": 128, 00:19:35.913 "large_cache_size": 16, 00:19:35.913 "task_count": 2048, 00:19:35.913 "sequence_count": 2048, 00:19:35.913 "buf_count": 2048 00:19:35.913 } 00:19:35.913 } 00:19:35.913 ] 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "subsystem": "bdev", 00:19:35.913 "config": [ 00:19:35.913 { 00:19:35.913 "method": "bdev_set_options", 00:19:35.913 "params": { 00:19:35.913 "bdev_io_pool_size": 65535, 00:19:35.913 "bdev_io_cache_size": 256, 00:19:35.913 "bdev_auto_examine": true, 00:19:35.913 "iobuf_small_cache_size": 128, 00:19:35.913 "iobuf_large_cache_size": 16 00:19:35.913 } 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "method": "bdev_raid_set_options", 00:19:35.913 "params": { 00:19:35.913 "process_window_size_kb": 1024, 00:19:35.913 "process_max_bandwidth_mb_sec": 0 00:19:35.913 } 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "method": "bdev_iscsi_set_options", 00:19:35.913 "params": { 00:19:35.913 "timeout_sec": 30 00:19:35.913 } 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "method": "bdev_nvme_set_options", 00:19:35.913 "params": { 00:19:35.913 "action_on_timeout": "none", 00:19:35.913 "timeout_us": 0, 00:19:35.913 "timeout_admin_us": 0, 00:19:35.913 "keep_alive_timeout_ms": 10000, 00:19:35.913 "arbitration_burst": 0, 00:19:35.913 "low_priority_weight": 0, 00:19:35.913 "medium_priority_weight": 0, 00:19:35.913 "high_priority_weight": 0, 00:19:35.913 "nvme_adminq_poll_period_us": 10000, 00:19:35.913 "nvme_ioq_poll_period_us": 0, 00:19:35.913 "io_queue_requests": 0, 00:19:35.913 "delay_cmd_submit": true, 00:19:35.913 "transport_retry_count": 4, 00:19:35.913 "bdev_retry_count": 3, 00:19:35.913 "transport_ack_timeout": 0, 00:19:35.913 "ctrlr_loss_timeout_sec": 0, 00:19:35.913 "reconnect_delay_sec": 0, 00:19:35.913 "fast_io_fail_timeout_sec": 0, 00:19:35.913 "disable_auto_failback": false, 00:19:35.913 "generate_uuids": false, 00:19:35.913 "transport_tos": 0, 00:19:35.913 "nvme_error_stat": false, 00:19:35.913 "rdma_srq_size": 0, 00:19:35.913 "io_path_stat": false, 00:19:35.913 "allow_accel_sequence": false, 00:19:35.913 "rdma_max_cq_size": 0, 00:19:35.913 "rdma_cm_event_timeout_ms": 0, 00:19:35.913 "dhchap_digests": [ 00:19:35.913 "sha256", 00:19:35.913 "sha384", 00:19:35.913 "sha512" 00:19:35.913 ], 00:19:35.913 "dhchap_dhgroups": [ 00:19:35.913 "null", 00:19:35.913 "ffdhe2048", 00:19:35.913 "ffdhe3072", 00:19:35.913 "ffdhe4096", 00:19:35.913 "ffdhe6144", 00:19:35.913 "ffdhe8192" 00:19:35.913 ] 00:19:35.913 } 00:19:35.913 }, 00:19:35.913 { 00:19:35.913 "method": "bdev_nvme_set_hotplug", 00:19:35.913 "params": { 00:19:35.913 "period_us": 100000, 00:19:35.914 "enable": false 00:19:35.914 } 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "method": "bdev_malloc_create", 00:19:35.914 "params": { 00:19:35.914 "name": "malloc0", 00:19:35.914 "num_blocks": 8192, 00:19:35.914 "block_size": 4096, 00:19:35.914 "physical_block_size": 4096, 00:19:35.914 "uuid": "e81c568e-444e-403a-b9cb-b0f3c3d1344a", 00:19:35.914 "optimal_io_boundary": 0, 00:19:35.914 "md_size": 0, 00:19:35.914 "dif_type": 0, 00:19:35.914 "dif_is_head_of_md": false, 00:19:35.914 "dif_pi_format": 0 00:19:35.914 } 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "method": "bdev_wait_for_examine" 
00:19:35.914 } 00:19:35.914 ] 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "subsystem": "nbd", 00:19:35.914 "config": [] 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "subsystem": "scheduler", 00:19:35.914 "config": [ 00:19:35.914 { 00:19:35.914 "method": "framework_set_scheduler", 00:19:35.914 "params": { 00:19:35.914 "name": "static" 00:19:35.914 } 00:19:35.914 } 00:19:35.914 ] 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "subsystem": "nvmf", 00:19:35.914 "config": [ 00:19:35.914 { 00:19:35.914 "method": "nvmf_set_config", 00:19:35.914 "params": { 00:19:35.914 "discovery_filter": "match_any", 00:19:35.914 "admin_cmd_passthru": { 00:19:35.914 "identify_ctrlr": false 00:19:35.914 }, 00:19:35.914 "dhchap_digests": [ 00:19:35.914 "sha256", 00:19:35.914 "sha384", 00:19:35.914 "sha512" 00:19:35.914 ], 00:19:35.914 "dhchap_dhgroups": [ 00:19:35.914 "null", 00:19:35.914 "ffdhe2048", 00:19:35.914 "ffdhe3072", 00:19:35.914 "ffdhe4096", 00:19:35.914 "ffdhe6144", 00:19:35.914 "ffdhe8192" 00:19:35.914 ] 00:19:35.914 } 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "method": "nvmf_set_max_subsystems", 00:19:35.914 "params": { 00:19:35.914 "max_subsystems": 1024 00:19:35.914 } 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "method": "nvmf_set_crdt", 00:19:35.914 "params": { 00:19:35.914 "crdt1": 0, 00:19:35.914 "crdt2": 0, 00:19:35.914 "crdt3": 0 00:19:35.914 } 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "method": "nvmf_create_transport", 00:19:35.914 "params": { 00:19:35.914 "trtype": "TCP", 00:19:35.914 "max_queue_depth": 128, 00:19:35.914 "max_io_qpairs_per_ctrlr": 127, 00:19:35.914 "in_capsule_data_size": 4096, 00:19:35.914 "max_io_size": 131072, 00:19:35.914 "io_unit_size": 131072, 00:19:35.914 "max_aq_depth": 128, 00:19:35.914 "num_shared_buffers": 511, 00:19:35.914 "buf_cache_size": 4294967295, 00:19:35.914 "dif_insert_or_strip": false, 00:19:35.914 "zcopy": false, 00:19:35.914 "c2h_success": false, 00:19:35.914 "sock_priority": 0, 00:19:35.914 "abort_timeout_sec": 1, 00:19:35.914 "ack_timeout": 0, 00:19:35.914 "data_wr_pool_size": 0 00:19:35.914 } 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "method": "nvmf_create_subsystem", 00:19:35.914 "params": { 00:19:35.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.914 "allow_any_host": false, 00:19:35.914 "serial_number": "00000000000000000000", 00:19:35.914 "model_number": "SPDK bdev Controller", 00:19:35.914 "max_namespaces": 32, 00:19:35.914 "min_cntlid": 1, 00:19:35.914 "max_cntlid": 65519, 00:19:35.914 "ana_reporting": false 00:19:35.914 } 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "method": "nvmf_subsystem_add_host", 00:19:35.914 "params": { 00:19:35.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.914 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.914 "psk": "key0" 00:19:35.914 } 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "method": "nvmf_subsystem_add_ns", 00:19:35.914 "params": { 00:19:35.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.914 "namespace": { 00:19:35.914 "nsid": 1, 00:19:35.914 "bdev_name": "malloc0", 00:19:35.914 "nguid": "E81C568E444E403AB9CBB0F3C3D1344A", 00:19:35.914 "uuid": "e81c568e-444e-403a-b9cb-b0f3c3d1344a", 00:19:35.914 "no_auto_visible": false 00:19:35.914 } 00:19:35.914 } 00:19:35.914 }, 00:19:35.914 { 00:19:35.914 "method": "nvmf_subsystem_add_listener", 00:19:35.914 "params": { 00:19:35.914 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.914 "listen_address": { 00:19:35.914 "trtype": "TCP", 00:19:35.914 "adrfam": "IPv4", 00:19:35.914 "traddr": "10.0.0.2", 00:19:35.914 "trsvcid": "4420" 00:19:35.914 }, 00:19:35.914 
"secure_channel": false, 00:19:35.914 "sock_impl": "ssl" 00:19:35.914 } 00:19:35.914 } 00:19:35.914 ] 00:19:35.914 } 00:19:35.914 ] 00:19:35.914 }' 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1137899 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1137899 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1137899 ']' 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.914 09:21:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.914 [2024-11-19 09:21:36.938527] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:19:35.914 [2024-11-19 09:21:36.938573] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.173 [2024-11-19 09:21:37.016635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.173 [2024-11-19 09:21:37.057216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.173 [2024-11-19 09:21:37.057252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.173 [2024-11-19 09:21:37.057260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.173 [2024-11-19 09:21:37.057266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.173 [2024-11-19 09:21:37.057271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:36.173 [2024-11-19 09:21:37.057846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.432 [2024-11-19 09:21:37.271064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.432 [2024-11-19 09:21:37.303105] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.432 [2024-11-19 09:21:37.303309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.999 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:36.999 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1137936 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1137936 /var/tmp/bdevperf.sock 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1137936 ']' 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
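The bdevperf invocations in this test use: -m 2 (core mask 0x2, hence "Reactor started on core 1"), -z (start idle and wait for an RPC to begin I/O), -r /var/tmp/bdevperf.sock (RPC listen socket), -q 128 (queue depth), -o 4k (4 KiB I/O size), -w verify (read-back verification workload), -t 1 (one-second run), and here additionally -c /dev/fd/63 (JSON config piped in, carrying the keyring and controller-attach methods). Once the idle instance is up and configured, the workload is triggered with the helper that appears later in the trace (path abbreviated):

  # Kick off the queued workload on the idle (-z) bdevperf instance
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests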
00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:37.000 09:21:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:37.000 "subsystems": [ 00:19:37.000 { 00:19:37.000 "subsystem": "keyring", 00:19:37.000 "config": [ 00:19:37.000 { 00:19:37.000 "method": "keyring_file_add_key", 00:19:37.000 "params": { 00:19:37.000 "name": "key0", 00:19:37.000 "path": "/tmp/tmp.mCU9m0uVNT" 00:19:37.000 } 00:19:37.000 } 00:19:37.000 ] 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "subsystem": "iobuf", 00:19:37.000 "config": [ 00:19:37.000 { 00:19:37.000 "method": "iobuf_set_options", 00:19:37.000 "params": { 00:19:37.000 "small_pool_count": 8192, 00:19:37.000 "large_pool_count": 1024, 00:19:37.000 "small_bufsize": 8192, 00:19:37.000 "large_bufsize": 135168, 00:19:37.000 "enable_numa": false 00:19:37.000 } 00:19:37.000 } 00:19:37.000 ] 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "subsystem": "sock", 00:19:37.000 "config": [ 00:19:37.000 { 00:19:37.000 "method": "sock_set_default_impl", 00:19:37.000 "params": { 00:19:37.000 "impl_name": "posix" 00:19:37.000 } 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "method": "sock_impl_set_options", 00:19:37.000 "params": { 00:19:37.000 "impl_name": "ssl", 00:19:37.000 "recv_buf_size": 4096, 00:19:37.000 "send_buf_size": 4096, 00:19:37.000 "enable_recv_pipe": true, 00:19:37.000 "enable_quickack": false, 00:19:37.000 "enable_placement_id": 0, 00:19:37.000 "enable_zerocopy_send_server": true, 00:19:37.000 "enable_zerocopy_send_client": false, 00:19:37.000 "zerocopy_threshold": 0, 00:19:37.000 "tls_version": 0, 00:19:37.000 "enable_ktls": false 00:19:37.000 } 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "method": "sock_impl_set_options", 00:19:37.000 "params": { 00:19:37.000 "impl_name": "posix", 00:19:37.000 "recv_buf_size": 2097152, 00:19:37.000 "send_buf_size": 2097152, 00:19:37.000 "enable_recv_pipe": true, 00:19:37.000 "enable_quickack": false, 00:19:37.000 "enable_placement_id": 0, 00:19:37.000 "enable_zerocopy_send_server": true, 00:19:37.000 "enable_zerocopy_send_client": false, 00:19:37.000 "zerocopy_threshold": 0, 00:19:37.000 "tls_version": 0, 00:19:37.000 "enable_ktls": false 00:19:37.000 } 00:19:37.000 } 00:19:37.000 ] 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "subsystem": "vmd", 00:19:37.000 "config": [] 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "subsystem": "accel", 00:19:37.000 "config": [ 00:19:37.000 { 00:19:37.000 "method": "accel_set_options", 00:19:37.000 "params": { 00:19:37.000 "small_cache_size": 128, 00:19:37.000 "large_cache_size": 16, 00:19:37.000 "task_count": 2048, 00:19:37.000 "sequence_count": 2048, 00:19:37.000 "buf_count": 2048 00:19:37.000 } 00:19:37.000 } 00:19:37.000 ] 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "subsystem": "bdev", 00:19:37.000 "config": [ 00:19:37.000 { 00:19:37.000 "method": "bdev_set_options", 00:19:37.000 "params": { 00:19:37.000 "bdev_io_pool_size": 65535, 00:19:37.000 "bdev_io_cache_size": 256, 00:19:37.000 "bdev_auto_examine": true, 00:19:37.000 "iobuf_small_cache_size": 128, 00:19:37.000 "iobuf_large_cache_size": 16 00:19:37.000 } 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "method": 
"bdev_raid_set_options", 00:19:37.000 "params": { 00:19:37.000 "process_window_size_kb": 1024, 00:19:37.000 "process_max_bandwidth_mb_sec": 0 00:19:37.000 } 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "method": "bdev_iscsi_set_options", 00:19:37.000 "params": { 00:19:37.000 "timeout_sec": 30 00:19:37.000 } 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "method": "bdev_nvme_set_options", 00:19:37.000 "params": { 00:19:37.000 "action_on_timeout": "none", 00:19:37.000 "timeout_us": 0, 00:19:37.000 "timeout_admin_us": 0, 00:19:37.000 "keep_alive_timeout_ms": 10000, 00:19:37.000 "arbitration_burst": 0, 00:19:37.000 "low_priority_weight": 0, 00:19:37.000 "medium_priority_weight": 0, 00:19:37.000 "high_priority_weight": 0, 00:19:37.000 "nvme_adminq_poll_period_us": 10000, 00:19:37.000 "nvme_ioq_poll_period_us": 0, 00:19:37.000 "io_queue_requests": 512, 00:19:37.000 "delay_cmd_submit": true, 00:19:37.000 "transport_retry_count": 4, 00:19:37.000 "bdev_retry_count": 3, 00:19:37.000 "transport_ack_timeout": 0, 00:19:37.000 "ctrlr_loss_timeout_sec": 0, 00:19:37.000 "reconnect_delay_sec": 0, 00:19:37.000 "fast_io_fail_timeout_sec": 0, 00:19:37.000 "disable_auto_failback": false, 00:19:37.000 "generate_uuids": false, 00:19:37.000 "transport_tos": 0, 00:19:37.000 "nvme_error_stat": false, 00:19:37.000 "rdma_srq_size": 0, 00:19:37.000 "io_path_stat": false, 00:19:37.000 "allow_accel_sequence": false, 00:19:37.000 "rdma_max_cq_size": 0, 00:19:37.000 "rdma_cm_event_timeout_ms": 0, 00:19:37.000 "dhchap_digests": [ 00:19:37.000 "sha256", 00:19:37.000 "sha384", 00:19:37.000 "sha512" 00:19:37.000 ], 00:19:37.000 "dhchap_dhgroups": [ 00:19:37.000 "null", 00:19:37.000 "ffdhe2048", 00:19:37.000 "ffdhe3072", 00:19:37.000 "ffdhe4096", 00:19:37.000 "ffdhe6144", 00:19:37.000 "ffdhe8192" 00:19:37.000 ] 00:19:37.000 } 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "method": "bdev_nvme_attach_controller", 00:19:37.000 "params": { 00:19:37.000 "name": "nvme0", 00:19:37.000 "trtype": "TCP", 00:19:37.000 "adrfam": "IPv4", 00:19:37.000 "traddr": "10.0.0.2", 00:19:37.000 "trsvcid": "4420", 00:19:37.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.000 "prchk_reftag": false, 00:19:37.000 "prchk_guard": false, 00:19:37.000 "ctrlr_loss_timeout_sec": 0, 00:19:37.000 "reconnect_delay_sec": 0, 00:19:37.000 "fast_io_fail_timeout_sec": 0, 00:19:37.000 "psk": "key0", 00:19:37.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.000 "hdgst": false, 00:19:37.000 "ddgst": false, 00:19:37.000 "multipath": "multipath" 00:19:37.000 } 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "method": "bdev_nvme_set_hotplug", 00:19:37.000 "params": { 00:19:37.000 "period_us": 100000, 00:19:37.000 "enable": false 00:19:37.000 } 00:19:37.000 }, 00:19:37.000 { 00:19:37.000 "method": "bdev_enable_histogram", 00:19:37.000 "params": { 00:19:37.000 "name": "nvme0n1", 00:19:37.001 "enable": true 00:19:37.001 } 00:19:37.001 }, 00:19:37.001 { 00:19:37.001 "method": "bdev_wait_for_examine" 00:19:37.001 } 00:19:37.001 ] 00:19:37.001 }, 00:19:37.001 { 00:19:37.001 "subsystem": "nbd", 00:19:37.001 "config": [] 00:19:37.001 } 00:19:37.001 ] 00:19:37.001 }' 00:19:37.001 [2024-11-19 09:21:37.855278] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:19:37.001 [2024-11-19 09:21:37.855327] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137936 ] 00:19:37.001 [2024-11-19 09:21:37.931891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.001 [2024-11-19 09:21:37.972302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.259 [2024-11-19 09:21:38.126413] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.826 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.826 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:37.826 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:37.826 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:38.085 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.085 09:21:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:38.085 Running I/O for 1 seconds... 00:19:39.018 5346.00 IOPS, 20.88 MiB/s 00:19:39.018 Latency(us) 00:19:39.018 [2024-11-19T08:21:40.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.018 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:39.018 Verification LBA range: start 0x0 length 0x2000 00:19:39.018 nvme0n1 : 1.02 5388.36 21.05 0.00 0.00 23590.88 5841.25 22111.28 00:19:39.018 [2024-11-19T08:21:40.077Z] =================================================================================================================== 00:19:39.018 [2024-11-19T08:21:40.077Z] Total : 5388.36 21.05 0.00 0.00 23590.88 5841.25 22111.28 00:19:39.018 { 00:19:39.018 "results": [ 00:19:39.018 { 00:19:39.018 "job": "nvme0n1", 00:19:39.018 "core_mask": "0x2", 00:19:39.018 "workload": "verify", 00:19:39.018 "status": "finished", 00:19:39.018 "verify_range": { 00:19:39.018 "start": 0, 00:19:39.018 "length": 8192 00:19:39.018 }, 00:19:39.018 "queue_depth": 128, 00:19:39.018 "io_size": 4096, 00:19:39.018 "runtime": 1.015894, 00:19:39.018 "iops": 5388.357446741491, 00:19:39.018 "mibps": 21.04827127633395, 00:19:39.018 "io_failed": 0, 00:19:39.018 "io_timeout": 0, 00:19:39.018 "avg_latency_us": 23590.877872631092, 00:19:39.018 "min_latency_us": 5841.252173913043, 00:19:39.018 "max_latency_us": 22111.27652173913 00:19:39.018 } 00:19:39.018 ], 00:19:39.018 "core_count": 1 00:19:39.018 } 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = 
--pid ']' 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:39.018 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:39.018 nvmf_trace.0 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1137936 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1137936 ']' 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1137936 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1137936 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1137936' 00:19:39.277 killing process with pid 1137936 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1137936 00:19:39.277 Received shutdown signal, test time was about 1.000000 seconds 00:19:39.277 00:19:39.277 Latency(us) 00:19:39.277 [2024-11-19T08:21:40.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.277 [2024-11-19T08:21:40.336Z] =================================================================================================================== 00:19:39.277 [2024-11-19T08:21:40.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.277 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1137936 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:39.536 rmmod nvme_tcp 00:19:39.536 rmmod nvme_fabrics 00:19:39.536 rmmod nvme_keyring 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:39.536 09:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1137899 ']' 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1137899 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1137899 ']' 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1137899 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1137899 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1137899' 00:19:39.536 killing process with pid 1137899 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1137899 00:19:39.536 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1137899 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.795 09:21:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.701 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:41.701 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.uCKRBSuG5l /tmp/tmp.4EwbCTO8rp /tmp/tmp.mCU9m0uVNT 00:19:41.701 00:19:41.701 real 1m19.713s 00:19:41.701 user 2m2.035s 00:19:41.701 sys 0m30.678s 00:19:41.701 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:41.701 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.701 ************************************ 00:19:41.701 END TEST nvmf_tls 
00:19:41.701 ************************************ 00:19:41.701 09:21:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:41.701 09:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:41.701 09:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:41.701 09:21:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.961 ************************************ 00:19:41.961 START TEST nvmf_fips 00:19:41.961 ************************************ 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:41.961 * Looking for test storage... 00:19:41.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:41.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.961 --rc genhtml_branch_coverage=1 00:19:41.961 --rc genhtml_function_coverage=1 00:19:41.961 --rc genhtml_legend=1 00:19:41.961 --rc geninfo_all_blocks=1 00:19:41.961 --rc geninfo_unexecuted_blocks=1 00:19:41.961 00:19:41.961 ' 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:41.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.961 --rc genhtml_branch_coverage=1 00:19:41.961 --rc genhtml_function_coverage=1 00:19:41.961 --rc genhtml_legend=1 00:19:41.961 --rc geninfo_all_blocks=1 00:19:41.961 --rc geninfo_unexecuted_blocks=1 00:19:41.961 00:19:41.961 ' 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:41.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.961 --rc genhtml_branch_coverage=1 00:19:41.961 --rc genhtml_function_coverage=1 00:19:41.961 --rc genhtml_legend=1 00:19:41.961 --rc geninfo_all_blocks=1 00:19:41.961 --rc geninfo_unexecuted_blocks=1 00:19:41.961 00:19:41.961 ' 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:41.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.961 --rc genhtml_branch_coverage=1 00:19:41.961 --rc genhtml_function_coverage=1 00:19:41.961 --rc genhtml_legend=1 00:19:41.961 --rc geninfo_all_blocks=1 00:19:41.961 --rc geninfo_unexecuted_blocks=1 00:19:41.961 00:19:41.961 ' 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.961 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:41.962 09:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.962 09:21:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:41.962 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.962 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:41.962 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:41.962 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:41.962 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:41.962 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:41.962 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.962 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:41.962 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:42.221 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:42.222 Error setting digest 00:19:42.222 40B2E2342F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:42.222 40B2E2342F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:42.222 
09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:42.222 09:21:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.943 09:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:48.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:48.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.943 09:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:48.943 Found net devices under 0000:86:00.0: cvl_0_0 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:48.943 Found net devices under 0000:86:00.1: cvl_0_1 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:48.943 09:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:48.943 09:21:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.943 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.943 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.943 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:48.943 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:48.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:19:48.943 00:19:48.944 --- 10.0.0.2 ping statistics --- 00:19:48.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.944 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:48.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:19:48.944 00:19:48.944 --- 10.0.0.1 ping statistics --- 00:19:48.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.944 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1141962 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1141962 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1141962 ']' 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:48.944 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:48.944 [2024-11-19 09:21:49.175254] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
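Annotation: condensed, the nvmf_tcp_init and nvmfappstart traces above come down to a handful of commands: one E810 port is moved into a private network namespace so target and initiator can talk over real hardware on a single host, a tagged firewall rule opens the NVMe/TCP port, and the target is started inside the namespace. Interface names, addresses, and flags are exactly as logged in this run; the polling loop standing in for waitforlisten is an assumption (the real helper lives in autotest_common.sh).

# build the two-endpoint topology: target port isolated in a namespace,
# initiator port left in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side E810 port
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port; the SPDK_NVMF comment tags the rule for teardown
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                      # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns

# start the target inside the namespace, then poll its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
for _ in $(seq 1 100); do                               # assumed stand-in for waitforlisten
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || exit 1                        # bail out if the target died early
    sleep 0.1
done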
00:19:48.944 [2024-11-19 09:21:49.175304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.944 [2024-11-19 09:21:49.255771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.944 [2024-11-19 09:21:49.296955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.944 [2024-11-19 09:21:49.296992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.944 [2024-11-19 09:21:49.297000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.944 [2024-11-19 09:21:49.297006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.944 [2024-11-19 09:21:49.297011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.944 [2024-11-19 09:21:49.297563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.201 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:49.201 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:49.201 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.201 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.201 09:21:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.wJc 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.wJc 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.wJc 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.wJc 00:19:49.201 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:49.201 [2024-11-19 09:21:50.218611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.201 [2024-11-19 09:21:50.234607] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.201 [2024-11-19 09:21:50.234792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.459 malloc0 00:19:49.459 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.459 09:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1142213 00:19:49.459 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.459 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1142213 /var/tmp/bdevperf.sock 00:19:49.459 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1142213 ']' 00:19:49.459 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.459 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:49.459 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.459 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:49.460 09:21:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:49.460 [2024-11-19 09:21:50.367973] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:19:49.460 [2024-11-19 09:21:50.368026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142213 ] 00:19:49.460 [2024-11-19 09:21:50.443027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.460 [2024-11-19 09:21:50.484369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.391 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:50.391 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:50.391 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.wJc 00:19:50.391 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.649 [2024-11-19 09:21:51.538455] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.649 TLSTESTn1 00:19:50.649 09:21:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:50.906 Running I/O for 10 seconds... 
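Annotation: the initiator half of the TLS test traced above is one bdevperf launch plus three RPCs. Everything below is lifted from the trace (same PSK, NQNs, ports, and RPC socket); only the absolute workspace paths are shortened to relative ones, and the waitforlisten step between launching bdevperf and issuing RPCs is omitted.

# write the pre-shared key to a 0600 temp file, as the test does
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)                      # e.g. /tmp/spdk-psk.wJc in this run
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"

# launch bdevperf idle (-z), register the PSK in its keyring, attach over
# TLS to the listener at 10.0.0.2:4420, then drive the 10-second verify run
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests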
00:19:52.770 5332.00 IOPS, 20.83 MiB/s [2024-11-19T08:21:54.762Z] 5491.50 IOPS, 21.45 MiB/s [2024-11-19T08:21:56.134Z] 5487.67 IOPS, 21.44 MiB/s [2024-11-19T08:21:57.066Z] 5515.75 IOPS, 21.55 MiB/s [2024-11-19T08:21:57.998Z] 5499.80 IOPS, 21.48 MiB/s [2024-11-19T08:21:58.930Z] 5521.17 IOPS, 21.57 MiB/s [2024-11-19T08:21:59.862Z] 5505.86 IOPS, 21.51 MiB/s [2024-11-19T08:22:00.795Z] 5525.00 IOPS, 21.58 MiB/s [2024-11-19T08:22:02.169Z] 5484.89 IOPS, 21.43 MiB/s [2024-11-19T08:22:02.169Z] 5486.80 IOPS, 21.43 MiB/s 00:20:01.110 Latency(us) 00:20:01.110 [2024-11-19T08:22:02.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.110 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.110 Verification LBA range: start 0x0 length 0x2000 00:20:01.110 TLSTESTn1 : 10.02 5489.84 21.44 0.00 0.00 23278.67 5983.72 24618.74 00:20:01.110 [2024-11-19T08:22:02.169Z] =================================================================================================================== 00:20:01.110 [2024-11-19T08:22:02.169Z] Total : 5489.84 21.44 0.00 0.00 23278.67 5983.72 24618.74 00:20:01.110 { 00:20:01.110 "results": [ 00:20:01.110 { 00:20:01.110 "job": "TLSTESTn1", 00:20:01.110 "core_mask": "0x4", 00:20:01.110 "workload": "verify", 00:20:01.110 "status": "finished", 00:20:01.110 "verify_range": { 00:20:01.110 "start": 0, 00:20:01.110 "length": 8192 00:20:01.110 }, 00:20:01.110 "queue_depth": 128, 00:20:01.111 "io_size": 4096, 00:20:01.111 "runtime": 10.017409, 00:20:01.111 "iops": 5489.842732786492, 00:20:01.111 "mibps": 21.444698174947234, 00:20:01.111 "io_failed": 0, 00:20:01.111 "io_timeout": 0, 00:20:01.111 "avg_latency_us": 23278.66738271843, 00:20:01.111 "min_latency_us": 5983.721739130435, 00:20:01.111 "max_latency_us": 24618.740869565216 00:20:01.111 } 00:20:01.111 ], 00:20:01.111 "core_count": 1 00:20:01.111 } 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:01.111 nvmf_trace.0 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1142213 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1142213 ']' 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 1142213 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1142213 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1142213' 00:20:01.111 killing process with pid 1142213 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1142213 00:20:01.111 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.111 00:20:01.111 Latency(us) 00:20:01.111 [2024-11-19T08:22:02.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.111 [2024-11-19T08:22:02.170Z] =================================================================================================================== 00:20:01.111 [2024-11-19T08:22:02.170Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.111 09:22:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1142213 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:01.111 rmmod nvme_tcp 00:20:01.111 rmmod nvme_fabrics 00:20:01.111 rmmod nvme_keyring 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1141962 ']' 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1141962 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1141962 ']' 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 1141962 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:01.111 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1141962 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:01.370 09:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1141962' 00:20:01.370 killing process with pid 1141962 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1141962 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1141962 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.370 09:22:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.wJc 00:20:03.904 00:20:03.904 real 0m21.689s 00:20:03.904 user 0m23.609s 00:20:03.904 sys 0m9.468s 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:03.904 ************************************ 00:20:03.904 END TEST nvmf_fips 00:20:03.904 ************************************ 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:03.904 ************************************ 00:20:03.904 START TEST nvmf_control_msg_list 00:20:03.904 ************************************ 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:03.904 * Looking for test storage... 
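Annotation: the nvmf_fips teardown just above leans on the tagging done at setup time: every rule iptr added carries an SPDK_NVMF comment, so a save/filter/restore round-trip removes exactly those rules and nothing else. Condensed, it is the following; the namespace deletion is an assumption, since _remove_spdk_ns runs with its xtrace redirected away in the log.

iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the tagged rules
ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                                # reset the initiator port
rm -f /tmp/spdk-psk.wJc                                 # discard the throwaway PSK file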
00:20:03.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:03.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.904 --rc genhtml_branch_coverage=1 00:20:03.904 --rc genhtml_function_coverage=1 00:20:03.904 --rc genhtml_legend=1 00:20:03.904 --rc geninfo_all_blocks=1 00:20:03.904 --rc geninfo_unexecuted_blocks=1 00:20:03.904 00:20:03.904 ' 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:03.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.904 --rc genhtml_branch_coverage=1 00:20:03.904 --rc genhtml_function_coverage=1 00:20:03.904 --rc genhtml_legend=1 00:20:03.904 --rc geninfo_all_blocks=1 00:20:03.904 --rc geninfo_unexecuted_blocks=1 00:20:03.904 00:20:03.904 ' 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:03.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.904 --rc genhtml_branch_coverage=1 00:20:03.904 --rc genhtml_function_coverage=1 00:20:03.904 --rc genhtml_legend=1 00:20:03.904 --rc geninfo_all_blocks=1 00:20:03.904 --rc geninfo_unexecuted_blocks=1 00:20:03.904 00:20:03.904 ' 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:03.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.904 --rc genhtml_branch_coverage=1 00:20:03.904 --rc genhtml_function_coverage=1 00:20:03.904 --rc genhtml_legend=1 00:20:03.904 --rc geninfo_all_blocks=1 00:20:03.904 --rc geninfo_unexecuted_blocks=1 00:20:03.904 00:20:03.904 ' 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.904 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:03.905 09:22:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:10.470 09:22:10 
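The "integer expression expected" complaint above is a genuine shell error, not test output: common.sh line 33 evaluates [ '' -eq 1 ] because the variable it tests expands empty. The test simply falls through, but the usual hardening is a numeric default on the expansion. A sketch, with SPDK_TEST_SETTING as a placeholder since the trace does not show which variable line 33 reads:

    # Failing form, as traced:  '[' '' -eq 1 ']'  ->  "integer expression expected"
    # Hardened form (SPDK_TEST_SETTING is a placeholder name, not the real one):
    if [ "${SPDK_TEST_SETTING:-0}" -eq 1 ]; then
        :   # branch taken only when the flag is explicitly 1
    fi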
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.470 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:10.471 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.471 09:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:10.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:10.471 Found net devices under 0000:86:00.0: cvl_0_0 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
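The scan above matches both E810 physical functions (vendor 0x8086, device 0x159b) and then resolves each function's bound netdev through sysfs before printing the Found lines. The same walk as a standalone sketch, condensed from the traced pci_devs/pci_net_devs handling:

    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        pci_net_devs=("$pci"/net/*)        # e.g. .../0000:86:00.0/net/cvl_0_0
        echo "Found ${pci##*/}: ${pci_net_devs[*]##*/}"
    done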
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:10.471 Found net devices under 0000:86:00.1: cvl_0_1 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.471 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.471 09:22:10 
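Those ip commands carve a two-endpoint TCP topology out of one dual-port NIC: the target port moves into a private network namespace, the initiator port stays in the root namespace, and traffic crosses the physical wire on 10.0.0.0/24. Replayed in order (commands taken directly from the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables rule and ping pair that follow then open port 4420 on the initiator interface and prove 10.0.0.1 <-> 10.0.0.2 reachability before any NVMe traffic flows.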
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:10.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:20:10.472 00:20:10.472 --- 10.0.0.2 ping statistics --- 00:20:10.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.472 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:20:10.472 00:20:10.472 --- 10.0.0.1 ping statistics --- 00:20:10.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.472 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1147604 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1147604 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 1147604 ']' 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.472 [2024-11-19 09:22:10.718226] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:20:10.472 [2024-11-19 09:22:10.718272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.472 [2024-11-19 09:22:10.799585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.472 [2024-11-19 09:22:10.840128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.472 [2024-11-19 09:22:10.840164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.472 [2024-11-19 09:22:10.840172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.472 [2024-11-19 09:22:10.840178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.472 [2024-11-19 09:22:10.840183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
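nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. A sketch of the equivalent manual sequence; the launch command mirrors the trace, while the polling loop is a simplification of the harness's waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF &            # -i shm id, -e tracepoint mask
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5                                      # wait for the RPC socket to answer
    done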
00:20:10.472 [2024-11-19 09:22:10.840738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.472 [2024-11-19 09:22:10.988199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.472 09:22:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:10.472 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.472 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.472 Malloc0 00:20:10.472 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.473 09:22:11 
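Those rpc_cmd calls provision the target: a TCP transport with a deliberately tiny control-message pool (the property under test), an any-host subsystem, and a RAM-backed namespace. The same steps as direct rpc.py invocations, which is effectively what the rpc_cmd wrapper issues; the listener registration follows immediately in the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1   # a 1-entry control msg list
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a   # -a: allow any host
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512  # 32 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0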
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:10.473 [2024-11-19 09:22:11.028517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1147740 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1147742 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1147744 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:10.473 09:22:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1147740 00:20:10.473 [2024-11-19 09:22:11.113063] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:10.473 [2024-11-19 09:22:11.113247] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:10.473 [2024-11-19 09:22:11.122903] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:11.405 Initializing NVMe Controllers 00:20:11.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:11.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:11.405 Initialization complete. Launching workers. 
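Three spdk_nvme_perf workers then drive the same subsystem from cores 1, 2, and 3; only the -c core mask differs between the three traced invocations. One of them, spelled out:

    build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # -c core mask, -q queue depth, -o I/O size in bytes, -w workload,
    # -t run time in seconds, -r transport ID of the listener created above

In the result tables below, MiB/s follows directly from IOPS at the fixed 4 KiB I/O size: 6233.00 IOPS x 4096 B = 25,530,368 B/s, or 24.35 MiB/s, matching the lcore-3 row. The lcore-1 worker's ~25 IOPS at ~41 ms average latency is consistent with the single-entry control message pool configured above starving that connection, which is the behavior this test exercises.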
00:20:11.405 ======================================================== 00:20:11.405 Latency(us) 00:20:11.405 Device Information : IOPS MiB/s Average min max 00:20:11.406 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6233.00 24.35 160.07 137.05 360.16 00:20:11.406 ======================================================== 00:20:11.406 Total : 6233.00 24.35 160.07 137.05 360.16 00:20:11.406 00:20:11.406 Initializing NVMe Controllers 00:20:11.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:11.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:11.406 Initialization complete. Launching workers. 00:20:11.406 ======================================================== 00:20:11.406 Latency(us) 00:20:11.406 Device Information : IOPS MiB/s Average min max 00:20:11.406 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6259.00 24.45 159.39 128.75 361.28 00:20:11.406 ======================================================== 00:20:11.406 Total : 6259.00 24.45 159.39 128.75 361.28 00:20:11.406 00:20:11.406 Initializing NVMe Controllers 00:20:11.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:11.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:11.406 Initialization complete. Launching workers. 00:20:11.406 ======================================================== 00:20:11.406 Latency(us) 00:20:11.406 Device Information : IOPS MiB/s Average min max 00:20:11.406 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40963.96 40676.35 41886.60 00:20:11.406 ======================================================== 00:20:11.406 Total : 25.00 0.10 40963.96 40676.35 41886.60 00:20:11.406 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1147742 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1147744 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.406 rmmod nvme_tcp 00:20:11.406 rmmod nvme_fabrics 00:20:11.406 rmmod nvme_keyring 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 1147604 ']' 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1147604 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 1147604 ']' 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 1147604 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:11.406 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1147604 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1147604' 00:20:11.664 killing process with pid 1147604 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 1147604 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 1147604 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.664 09:22:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.198 00:20:14.198 real 0m10.179s 00:20:14.198 user 0m6.579s 00:20:14.198 sys 0m5.561s 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:14.198 ************************************ 00:20:14.198 END TEST nvmf_control_msg_list 00:20:14.198 ************************************ 
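Teardown mirrors the setup: the target is killed only after confirming the pid still names an SPDK reactor (and never a sudo wrapper), the SPDK_NVMF-tagged iptables rules are stripped, and the namespace is removed. A sketch of the traced killprocess pattern plus the rule cleanup; the in-tree helpers carry more retries and error handling, and the final netns delete is an assumption about what _remove_spdk_ns does:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0        # already gone
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1   # refuse to kill sudo
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2> /dev/null
    }
    killprocess "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only this run's tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns

The START/END banners and the real/user/sys totals above come from the run_test wrapper, which times each suite and propagates its exit code; the log then moves straight into the next suite, nvmf_wait_for_buf, repeating the same environment bring-up.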
00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:14.198 ************************************ 00:20:14.198 START TEST nvmf_wait_for_buf 00:20:14.198 ************************************ 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:14.198 * Looking for test storage... 00:20:14.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.198 --rc genhtml_branch_coverage=1 00:20:14.198 --rc genhtml_function_coverage=1 00:20:14.198 --rc genhtml_legend=1 00:20:14.198 --rc geninfo_all_blocks=1 00:20:14.198 --rc geninfo_unexecuted_blocks=1 00:20:14.198 00:20:14.198 ' 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.198 --rc genhtml_branch_coverage=1 00:20:14.198 --rc genhtml_function_coverage=1 00:20:14.198 --rc genhtml_legend=1 00:20:14.198 --rc geninfo_all_blocks=1 00:20:14.198 --rc geninfo_unexecuted_blocks=1 00:20:14.198 00:20:14.198 ' 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.198 --rc genhtml_branch_coverage=1 00:20:14.198 --rc genhtml_function_coverage=1 00:20:14.198 --rc genhtml_legend=1 00:20:14.198 --rc geninfo_all_blocks=1 00:20:14.198 --rc geninfo_unexecuted_blocks=1 00:20:14.198 00:20:14.198 ' 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.198 --rc genhtml_branch_coverage=1 00:20:14.198 --rc genhtml_function_coverage=1 00:20:14.198 --rc genhtml_legend=1 00:20:14.198 --rc geninfo_all_blocks=1 00:20:14.198 --rc geninfo_unexecuted_blocks=1 00:20:14.198 00:20:14.198 ' 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.198 09:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.198 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.199 09:22:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.767 
09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:20.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:20.767 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:20.767 Found net devices under 0000:86:00.0: cvl_0_0 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:20.767 Found net devices under 0000:86:00.1: cvl_0_1 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.767 09:22:20 
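The nvmf/common.sh trace above is the harness's device-discovery pass: each supported NIC (here two Intel E810 ports, vendor:device 0x8086:0x159b, bound to the ice driver) is resolved from its PCI address to the kernel interface name through sysfs. A minimal standalone sketch of that loop, with the two PCI addresses hard-coded for illustration (the harness derives them from its pci_bus_cache vendor:device lookup):

    # Map each supported NVMe-oF-capable NIC from its PCI address to the
    # net interface the driver created for it.
    pci_devs=(0000:86:00.0 0000:86:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

The "Found net devices under 0000:86:00.x: cvl_0_x" lines in the trace are exactly this echo.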
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:20.767 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:20.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:20:20.767 00:20:20.767 --- 10.0.0.2 ping statistics --- 00:20:20.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.768 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:20.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:20:20.768 00:20:20.768 --- 10.0.0.1 ping statistics --- 00:20:20.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.768 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1151399 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1151399 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 1151399 ']' 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.768 09:22:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 [2024-11-19 09:22:20.965383] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
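The nvmf_tcp_init sequence above builds a two-endpoint test topology out of the two physical ports: the target port cvl_0_0 is moved into its own network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction proves the path before any NVMe traffic flows. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk                 # isolated namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP on port 4420; the comment tag lets teardown remove
    # exactly this rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                           # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The namespace split keeps the kernel from short-circuiting the two local interfaces through its own stack, which is what lets a single host exercise real NIC traffic.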
00:20:20.768 [2024-11-19 09:22:20.965435] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.768 [2024-11-19 09:22:21.046008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.768 [2024-11-19 09:22:21.086598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.768 [2024-11-19 09:22:21.086635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.768 [2024-11-19 09:22:21.086644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.768 [2024-11-19 09:22:21.086651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.768 [2024-11-19 09:22:21.086656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.768 [2024-11-19 09:22:21.087275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.768 09:22:21 
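Starting the target with --wait-for-rpc is what makes this test possible: the app comes up with its reactor running but subsystem initialization deferred, so the small iobuf pool can be shrunk to just 154 buffers before framework_start_init allocates it; with 128 KiB reads in flight such a pool is guaranteed to run dry and exercise the wait-for-buffer path under test. A condensed replay of the traced RPC sequence (rpc_cmd is the harness wrapper; the plain scripts/rpc.py invocation and relative paths below are illustrative):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # (the harness waits for /var/tmp/spdk.sock to appear before issuing RPCs)
    ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    ./scripts/rpc.py framework_start_init    # subsystem init runs only now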
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 Malloc0 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 [2024-11-19 09:22:21.264933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.768 [2024-11-19 09:22:21.289124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.768 09:22:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:20.768 [2024-11-19 09:22:21.374034] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:22.144 Initializing NVMe Controllers 00:20:22.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:22.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:22.144 Initialization complete. Launching workers. 00:20:22.144 ======================================================== 00:20:22.144 Latency(us) 00:20:22.144 Device Information : IOPS MiB/s Average min max 00:20:22.144 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.97 7281.87 63853.09 00:20:22.144 ======================================================== 00:20:22.144 Total : 129.00 16.12 32238.97 7281.87 63853.09 00:20:22.144 00:20:22.144 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:22.144 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:22.144 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.144 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:22.144 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.144 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:22.144 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:22.145 rmmod nvme_tcp 00:20:22.145 rmmod nvme_fabrics 00:20:22.145 rmmod nvme_keyring 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1151399 ']' 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1151399 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 1151399 ']' 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 1151399 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
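The pass condition for this test is not the perf numbers above but the iobuf accounting: if the target ever had to park a request waiting for a small buffer, the nvmf_TCP module's small_pool.retry counter is non-zero. Here it reached 2038, so the [[ 2038 -eq 0 ]] failure branch is skipped and the test passes. The check as a standalone snippet (jq filter verbatim from the trace; the rpc.py path is illustrative):

    retry_count=$(./scripts/rpc.py iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ $retry_count -eq 0 ]]; then
        echo "target never waited for an iobuf; pool not small enough?" >&2
        exit 1
    fi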
common/autotest_common.sh@957 -- # uname 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1151399 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1151399' 00:20:22.145 killing process with pid 1151399 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 1151399 00:20:22.145 09:22:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 1151399 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.145 09:22:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.676 09:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:24.676 00:20:24.676 real 0m10.426s 00:20:24.676 user 0m4.018s 00:20:24.676 sys 0m4.872s 00:20:24.676 09:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:24.676 09:22:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:24.676 ************************************ 00:20:24.676 END TEST nvmf_wait_for_buf 00:20:24.676 ************************************ 00:20:24.676 09:22:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:24.676 09:22:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:24.676 09:22:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:24.676 09:22:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:24.676 09:22:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:24.676 09:22:25 
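Teardown above is symmetric with setup, and the iptables handling shows why the setup rule carried an 'SPDK_NVMF:' comment: cleanup (the iptr helper) can strip every rule the harness added, and only those, with a single filter pass:

    # drop all SPDK-tagged rules, leave the rest of the firewall untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The module unloads (nvme-tcp, nvme-fabrics, keyring) run inside a for i in {1..20} retry loop, presumably because a just-closed connection can briefly hold a module reference.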
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:29.943 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.944 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.944 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:29.944 ************************************ 00:20:29.944 START TEST nvmf_perf_adq 00:20:29.944 ************************************ 00:20:29.944 09:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:30.202 * Looking for test storage... 00:20:30.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.202 09:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.202 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:30.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.203 --rc genhtml_branch_coverage=1 00:20:30.203 --rc genhtml_function_coverage=1 00:20:30.203 --rc genhtml_legend=1 00:20:30.203 --rc geninfo_all_blocks=1 00:20:30.203 --rc geninfo_unexecuted_blocks=1 00:20:30.203 00:20:30.203 ' 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:30.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.203 --rc genhtml_branch_coverage=1 00:20:30.203 --rc genhtml_function_coverage=1 00:20:30.203 --rc genhtml_legend=1 00:20:30.203 --rc geninfo_all_blocks=1 00:20:30.203 --rc geninfo_unexecuted_blocks=1 00:20:30.203 00:20:30.203 ' 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:30.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.203 --rc genhtml_branch_coverage=1 00:20:30.203 --rc genhtml_function_coverage=1 00:20:30.203 --rc genhtml_legend=1 00:20:30.203 --rc geninfo_all_blocks=1 00:20:30.203 --rc geninfo_unexecuted_blocks=1 00:20:30.203 00:20:30.203 ' 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:30.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.203 --rc genhtml_branch_coverage=1 00:20:30.203 --rc genhtml_function_coverage=1 00:20:30.203 --rc genhtml_legend=1 00:20:30.203 --rc geninfo_all_blocks=1 00:20:30.203 --rc geninfo_unexecuted_blocks=1 00:20:30.203 00:20:30.203 ' 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
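Before the test body runs, the harness checks the installed lcov version with a field-wise comparison (the lt 1.15 2 trace above): both version strings are split on '.', '-' and ':' and compared numerically field by field, with missing fields counting as zero. A trimmed sketch of just the less-than case exercised here (the full cmp_versions helper in scripts/common.sh also handles the other operators):

    lt() {
        local -a v1 v2
        local i max
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "lcov is pre-2.x"    # 1 < 2 decides at the first field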
00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:30.203 09:22:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.203 09:22:31 
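One real wart surfaces in the trace above: '[' '' -eq 1 ']' fails with "integer expression expected" because an unset configuration variable reaches test's numeric -eq operator as an empty string (common.sh line 33, inside build_nvmf_app_args; the trace does not show which variable is empty). The script survives it only because the failing test sits in a condition position, which errexit ignores, so it is simply treated as false. The usual defensive spelling supplies a default:

    var=""                          # stand-in for the unset config flag
    [ "$var" -eq 1 ] && echo on     # -> "[: : integer expression expected"
    [ "${var:-0}" -eq 1 ] && echo on || echo "empty treated as 0, no error"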
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.765 09:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:36.765 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:36.765 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:36.765 Found net devices under 0000:86:00.0: cvl_0_0 00:20:36.765 09:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:36.765 Found net devices under 0000:86:00.1: cvl_0_1 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:36.765 09:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:37.023 09:22:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:38.926 09:22:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
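adq_reload_driver above bounces the NIC driver before the ADQ test: the sch_mqprio qdisc module must be present (ADQ steers traffic through mqprio traffic classes), and reloading ice means the test starts from a clean channel and filter configuration. The reload destroys and recreates the interfaces, which is why a settle delay precedes the re-discovery pass that follows:

    modprobe -a sch_mqprio    # qdisc used by ADQ traffic classes
    rmmod ice                 # unload the E810 driver; interfaces disappear
    modprobe ice              # reload; ports re-enumerate with default config
    sleep 5                   # let links come back before nvmftestinit re-probes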
gather_supported_nvmf_pci_devs 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:44.202 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:44.202 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:44.202 Found net devices under 0000:86:00.0: cvl_0_0 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:44.202 Found net devices under 0000:86:00.1: cvl_0_1 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.202 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.203 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.203 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.203 09:22:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:20:44.203 00:20:44.203 --- 10.0.0.2 ping statistics --- 00:20:44.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.203 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:20:44.203 00:20:44.203 --- 10.0.0.1 ping statistics --- 00:20:44.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.203 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1159718 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1159718 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1159718 ']' 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:44.203 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.462 [2024-11-19 09:22:45.290162] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:20:44.462 [2024-11-19 09:22:45.290205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.462 [2024-11-19 09:22:45.370241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.462 [2024-11-19 09:22:45.414462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.462 [2024-11-19 09:22:45.414498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.462 [2024-11-19 09:22:45.414506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.462 [2024-11-19 09:22:45.414512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.462 [2024-11-19 09:22:45.414517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.462 [2024-11-19 09:22:45.415900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.462 [2024-11-19 09:22:45.415935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.462 [2024-11-19 09:22:45.416042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.462 [2024-11-19 09:22:45.416042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.462 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.721 
09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:44.721 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.722 [2024-11-19 09:22:45.626106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.722 Malloc1 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:44.722 [2024-11-19 09:22:45.694177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1159962 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:44.722 09:22:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:47.250 "tick_rate": 2300000000, 00:20:47.250 "poll_groups": [ 00:20:47.250 { 00:20:47.250 "name": "nvmf_tgt_poll_group_000", 00:20:47.250 "admin_qpairs": 1, 00:20:47.250 "io_qpairs": 1, 00:20:47.250 "current_admin_qpairs": 1, 00:20:47.250 "current_io_qpairs": 1, 00:20:47.250 "pending_bdev_io": 0, 00:20:47.250 "completed_nvme_io": 19149, 00:20:47.250 "transports": [ 00:20:47.250 { 00:20:47.250 "trtype": "TCP" 00:20:47.250 } 00:20:47.250 ] 00:20:47.250 }, 00:20:47.250 { 00:20:47.250 "name": "nvmf_tgt_poll_group_001", 00:20:47.250 "admin_qpairs": 0, 00:20:47.250 "io_qpairs": 1, 00:20:47.250 "current_admin_qpairs": 0, 00:20:47.250 "current_io_qpairs": 1, 00:20:47.250 "pending_bdev_io": 0, 00:20:47.250 "completed_nvme_io": 19110, 00:20:47.250 "transports": [ 00:20:47.250 { 00:20:47.250 "trtype": "TCP" 00:20:47.250 } 00:20:47.250 ] 00:20:47.250 }, 00:20:47.250 { 00:20:47.250 "name": "nvmf_tgt_poll_group_002", 00:20:47.250 "admin_qpairs": 0, 00:20:47.250 "io_qpairs": 1, 00:20:47.250 "current_admin_qpairs": 0, 00:20:47.250 "current_io_qpairs": 1, 00:20:47.250 "pending_bdev_io": 0, 00:20:47.250 "completed_nvme_io": 19034, 00:20:47.250 "transports": [ 00:20:47.250 { 00:20:47.250 "trtype": "TCP" 00:20:47.250 } 00:20:47.250 ] 00:20:47.250 }, 00:20:47.250 { 00:20:47.250 "name": "nvmf_tgt_poll_group_003", 00:20:47.250 "admin_qpairs": 0, 00:20:47.250 "io_qpairs": 1, 00:20:47.250 "current_admin_qpairs": 0, 00:20:47.250 "current_io_qpairs": 1, 00:20:47.250 "pending_bdev_io": 0, 00:20:47.250 "completed_nvme_io": 18809, 00:20:47.250 "transports": [ 00:20:47.250 { 00:20:47.250 "trtype": "TCP" 00:20:47.250 } 00:20:47.250 ] 00:20:47.250 } 00:20:47.250 ] 00:20:47.250 }' 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:47.250 09:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1159962 00:20:55.358 Initializing NVMe Controllers 00:20:55.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:55.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:55.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:55.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:20:55.358 Initialization complete. Launching workers. 00:20:55.358 ======================================================== 00:20:55.358 Latency(us) 00:20:55.358 Device Information : IOPS MiB/s Average min max 00:20:55.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10086.60 39.40 6346.37 2145.44 11563.41 00:20:55.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10272.20 40.13 6232.14 2431.89 10506.20 00:20:55.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10259.80 40.08 6239.35 2384.10 10623.54 00:20:55.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10126.40 39.56 6320.75 2316.14 11036.09 00:20:55.358 ======================================================== 00:20:55.358 Total : 40744.99 159.16 6284.26 2145.44 11563.41 00:20:55.358 00:20:55.358 [2024-11-19 09:22:55.862371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1740660 is same with the state(6) to be set 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.358 rmmod nvme_tcp 00:20:55.358 rmmod nvme_fabrics 00:20:55.358 rmmod nvme_keyring 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1159718 ']' 00:20:55.358 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1159718 00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1159718 ']' 00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1159718 00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1159718 00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1159718' 00:20:55.359 killing process with pid 1159718 00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1159718 
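After this first (baseline, --sock-priority 0) perf pass, the test validates connection placement rather than raw throughput: nvmf_get_stats must show each of the four poll groups (-m 0xF) holding exactly one I/O qpair, i.e. the posix sock layer's placement-id grouping spread the four perf connections one per reactor. A standalone sketch of that check, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket stand in for the harness's rpc_cmd wrapper:

  # Count poll groups currently carrying exactly one I/O qpair (mirrors perf_adq.sh@85-87).
  stats=$(./scripts/rpc.py nvmf_get_stats)
  count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' <<< "$stats" | wc -l)
  # 'length' emits one line per matching group, so wc -l is the matching-group count.
  if [[ $count -ne 4 ]]; then echo "expected 4 busy poll groups, got $count"; exit 1; fi

In the ADQ-enabled pass further below, the complementary check (perf_adq.sh@108) counts idle poll groups instead and expects at least two: as the second nvmf_get_stats output shows, the four perf connections land 2+2 on two poll groups while the other two groups stay empty once the tc filter confines port-4420 traffic to one traffic class.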
00:20:55.359 09:22:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1159718 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.359 09:22:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.264 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:57.264 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:57.265 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:57.265 09:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:58.641 09:22:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:00.545 09:23:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:05.822 09:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:05.822 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:05.823 09:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:05.823 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:05.823 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:05.823 Found net devices under 0000:86:00.0: cvl_0_0 00:21:05.823 09:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:05.823 Found net devices under 0000:86:00.1: cvl_0_1 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:05.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:21:05.823 00:21:05.823 --- 10.0.0.2 ping statistics --- 00:21:05.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.823 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:21:05.823 00:21:05.823 --- 10.0.0.1 ping statistics --- 00:21:05.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.823 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:05.823 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:05.824 09:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:05.824 net.core.busy_poll = 1 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:05.824 net.core.busy_read = 1 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1163561 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1163561 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1163561 ']' 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:05.824 09:23:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.824 [2024-11-19 09:23:06.860208] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
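The adq_configure_driver block just replayed is the crux of this second pass. Flattened out of the xtrace (device, namespace, and addresses exactly as logged), it dedicates a hardware traffic class on the E810 to NVMe/TCP and turns on socket busy polling:

  DEV=cvl_0_0
  NS() { ip netns exec cvl_0_0_ns_spdk "$@"; }
  NS ethtool --offload "$DEV" hw-tc-offload on                  # NIC-enforced traffic classes
  NS ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1                                # poll sockets instead of waiting on interrupts
  sysctl -w net.core.busy_read=1
  # Two traffic classes: priorities 0/1 map to TC0/TC1; TC0 = 2 queues at offset 0,
  # TC1 = 2 queues at offset 2; 'hw 1 mode channel' pushes the split into the NIC.
  NS tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  NS tc qdisc add dev "$DEV" ingress
  # Steer NVMe/TCP (10.0.0.2:4420) into TC1, offloaded to hardware (skip_sw).
  NS tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper that runs last (scripts/perf/nvmf/ in the SPDK tree) pairs each transmit queue with its matching receive queue via XPS, keeping a connection's TX and RX on the same CPU. Note that the target is then configured with --enable-placement-id 1 and --sock-priority 1 instead of 0, tying accepted sockets to the ADQ traffic class set up here.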
00:21:05.824 [2024-11-19 09:23:06.860261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.083 [2024-11-19 09:23:06.940618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.083 [2024-11-19 09:23:06.986405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.083 [2024-11-19 09:23:06.986436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.083 [2024-11-19 09:23:06.986443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.083 [2024-11-19 09:23:06.986449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.083 [2024-11-19 09:23:06.986455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.083 [2024-11-19 09:23:06.987854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.083 [2024-11-19 09:23:06.987975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.083 [2024-11-19 09:23:06.988082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.083 [2024-11-19 09:23:06.988082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.083 
09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.083 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.341 [2024-11-19 09:23:07.185697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.341 Malloc1 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.341 [2024-11-19 09:23:07.242596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1163769 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:06.341 09:23:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:08.245 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:08.245 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.245 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.245 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.245 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:08.245 "tick_rate": 2300000000, 00:21:08.245 "poll_groups": [ 00:21:08.245 { 00:21:08.245 "name": "nvmf_tgt_poll_group_000", 00:21:08.245 "admin_qpairs": 1, 00:21:08.245 "io_qpairs": 2, 00:21:08.245 "current_admin_qpairs": 1, 00:21:08.246 "current_io_qpairs": 2, 00:21:08.246 "pending_bdev_io": 0, 00:21:08.246 "completed_nvme_io": 28179, 00:21:08.246 "transports": [ 00:21:08.246 { 00:21:08.246 "trtype": "TCP" 00:21:08.246 } 00:21:08.246 ] 00:21:08.246 }, 00:21:08.246 { 00:21:08.246 "name": "nvmf_tgt_poll_group_001", 00:21:08.246 "admin_qpairs": 0, 00:21:08.246 "io_qpairs": 2, 00:21:08.246 "current_admin_qpairs": 0, 00:21:08.246 "current_io_qpairs": 2, 00:21:08.246 "pending_bdev_io": 0, 00:21:08.246 "completed_nvme_io": 28295, 00:21:08.246 "transports": [ 00:21:08.246 { 00:21:08.246 "trtype": "TCP" 00:21:08.246 } 00:21:08.246 ] 00:21:08.246 }, 00:21:08.246 { 00:21:08.246 "name": "nvmf_tgt_poll_group_002", 00:21:08.246 "admin_qpairs": 0, 00:21:08.246 "io_qpairs": 0, 00:21:08.246 "current_admin_qpairs": 0, 00:21:08.246 "current_io_qpairs": 0, 00:21:08.246 "pending_bdev_io": 0, 00:21:08.246 "completed_nvme_io": 0, 00:21:08.246 "transports": [ 00:21:08.246 { 00:21:08.246 "trtype": "TCP" 00:21:08.246 } 00:21:08.246 ] 00:21:08.246 }, 00:21:08.246 { 00:21:08.246 "name": "nvmf_tgt_poll_group_003", 00:21:08.246 "admin_qpairs": 0, 00:21:08.246 "io_qpairs": 0, 00:21:08.246 "current_admin_qpairs": 0, 00:21:08.246 "current_io_qpairs": 0, 00:21:08.246 "pending_bdev_io": 0, 00:21:08.246 "completed_nvme_io": 0, 00:21:08.246 "transports": [ 00:21:08.246 { 00:21:08.246 "trtype": "TCP" 00:21:08.246 } 00:21:08.246 ] 00:21:08.246 } 00:21:08.246 ] 00:21:08.246 }' 00:21:08.246 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:08.246 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:08.506 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:08.506 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:08.506 09:23:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1163769 00:21:16.673 Initializing NVMe Controllers 00:21:16.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:16.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:16.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:16.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:16.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:21:16.673 Initialization complete. Launching workers. 00:21:16.673 ======================================================== 00:21:16.673 Latency(us) 00:21:16.673 Device Information : IOPS MiB/s Average min max 00:21:16.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6678.48 26.09 9610.29 1552.90 53497.73 00:21:16.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7189.08 28.08 8930.97 1276.18 53360.02 00:21:16.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7443.88 29.08 8598.33 1524.12 53366.10 00:21:16.673 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7983.87 31.19 8032.21 1057.39 55034.23 00:21:16.673 ======================================================== 00:21:16.673 Total : 29295.31 114.43 8756.37 1057.39 55034.23 00:21:16.673 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.673 rmmod nvme_tcp 00:21:16.673 rmmod nvme_fabrics 00:21:16.673 rmmod nvme_keyring 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1163561 ']' 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1163561 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1163561 ']' 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1163561 00:21:16.673 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1163561 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1163561' 00:21:16.674 killing process with pid 1163561 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1163561 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1163561 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.674 09:23:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:19.210 00:21:19.210 real 0m48.865s 00:21:19.210 user 2m43.598s 00:21:19.210 sys 0m10.425s 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.210 ************************************ 00:21:19.210 END TEST nvmf_perf_adq 00:21:19.210 ************************************ 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:19.210 ************************************ 00:21:19.210 START TEST nvmf_shutdown 00:21:19.210 ************************************ 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:19.210 * Looking for test storage... 
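The pass criterion for the ADQ run above deserves a note: spdk_nvme_perf was pinned to cores 4-7 (-c 0xF0), so with ADQ steering working only two of the four target poll groups should ever see an I/O qpair, and perf_adq.sh fails the run if fewer than two groups report current_io_qpairs == 0. A minimal standalone sketch of that check, assuming a reachable target and the stock scripts/rpc.py entry point (the jq filter is the one visible in the trace):

# Count nvmf poll groups that never received an I/O qpair; with ADQ
# steering traffic to half the cores, at least two of four should stay idle.
nvmf_stats=$(scripts/rpc.py nvmf_get_stats)
count=$(echo "$nvmf_stats" | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
if [[ $count -lt 2 ]]; then
    echo "ADQ check failed: only $count idle poll groups" >&2
    exit 1
fi
echo "ADQ steering OK: $count poll groups stayed idle"
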
00:21:19.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:21:19.210 09:23:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.210 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:19.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.211 --rc genhtml_branch_coverage=1 00:21:19.211 --rc genhtml_function_coverage=1 00:21:19.211 --rc genhtml_legend=1 00:21:19.211 --rc geninfo_all_blocks=1 00:21:19.211 --rc geninfo_unexecuted_blocks=1 00:21:19.211 00:21:19.211 ' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:19.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.211 --rc genhtml_branch_coverage=1 00:21:19.211 --rc genhtml_function_coverage=1 00:21:19.211 --rc genhtml_legend=1 00:21:19.211 --rc geninfo_all_blocks=1 00:21:19.211 --rc geninfo_unexecuted_blocks=1 00:21:19.211 00:21:19.211 ' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:19.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.211 --rc genhtml_branch_coverage=1 00:21:19.211 --rc genhtml_function_coverage=1 00:21:19.211 --rc genhtml_legend=1 00:21:19.211 --rc geninfo_all_blocks=1 00:21:19.211 --rc geninfo_unexecuted_blocks=1 00:21:19.211 00:21:19.211 ' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:19.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.211 --rc genhtml_branch_coverage=1 00:21:19.211 --rc genhtml_function_coverage=1 00:21:19.211 --rc genhtml_legend=1 00:21:19.211 --rc geninfo_all_blocks=1 00:21:19.211 --rc geninfo_unexecuted_blocks=1 00:21:19.211 00:21:19.211 ' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
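The lcov gate traced just above (lt 1.15 2 via cmp_versions) is an ordinary field-wise version comparison: split both strings into numeric fields, walk them in parallel, and decide at the first difference, treating missing fields as zero. A self-contained sketch of the idea, under the simplifying assumption of dot-separated fields only (the tree's helper also splits on '-' and ':'; ver_lt is a stand-in name):

# Return 0 (true) when version $1 sorts strictly before version $2.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov 1.15 predates the 2.x option set"
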
00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:19.211 09:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:19.211 ************************************ 00:21:19.211 START TEST nvmf_shutdown_tc1 00:21:19.211 ************************************ 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.211 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.212 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.212 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.212 09:23:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.782 09:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.782 09:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:25.782 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:25.782 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:25.782 Found net devices under 0000:86:00.0: cvl_0_0 00:21:25.782 09:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:25.782 Found net devices under 0000:86:00.1: cvl_0_1 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.782 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:25.783 09:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:25.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:21:25.783 00:21:25.783 --- 10.0.0.2 ping statistics --- 00:21:25.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.783 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:25.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:21:25.783 00:21:25.783 --- 10.0.0.1 ping statistics --- 00:21:25.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.783 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1168991 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1168991 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1168991 ']' 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
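Those two pings close out the network bring-up that nvmf_tcp_init performed above, and the sequence is the whole story of how these phy tests wire NVMe/TCP: the first e810 port (cvl_0_0) is moved into a private namespace and addressed as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, the listener port is punched through iptables with a tagged rule, and connectivity is verified in both directions. Condensed to its commands (run as root; interface names and addresses are the ones this rig uses):

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port with a comment tag so cleanup can find the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Every later target-side command, including the nvmf_tgt launch that follows, is then prefixed with ip netns exec cvl_0_0_ns_spdk.
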
00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.783 [2024-11-19 09:23:26.170824] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:21:25.783 [2024-11-19 09:23:26.170874] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.783 [2024-11-19 09:23:26.250255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:25.783 [2024-11-19 09:23:26.293567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.783 [2024-11-19 09:23:26.293602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.783 [2024-11-19 09:23:26.293610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.783 [2024-11-19 09:23:26.293616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.783 [2024-11-19 09:23:26.293621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.783 [2024-11-19 09:23:26.295069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.783 [2024-11-19 09:23:26.295180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:25.783 [2024-11-19 09:23:26.295285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.783 [2024-11-19 09:23:26.295286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.783 [2024-11-19 09:23:26.436566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:25.783 09:23:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.783 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.783 Malloc1 
00:21:25.783 [2024-11-19 09:23:26.549588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.783 Malloc2 00:21:25.783 Malloc3 00:21:25.783 Malloc4 00:21:25.783 Malloc5 00:21:25.783 Malloc6 00:21:25.783 Malloc7 00:21:25.783 Malloc8 00:21:26.043 Malloc9 00:21:26.043 Malloc10 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1169143 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1169143 /var/tmp/bdevperf.sock 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1169143 ']' 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
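The bdev_svc instance being launched here, recorded as perfpid=1169143, exists only to be killed: tc1 feeds it a JSON config that attaches controllers for all ten subsystems, SIGKILLs it mid-flight, and then probes the target pid (1168991) with kill -0 to prove the target survived the abrupt disconnects, which is what the kill -9 1169143 / kill -0 1168991 sequence further below does. Stripped to its skeleton (variable names follow the script):

# Hard-kill the initiator-side app, then confirm the nvmf target still lives.
kill -9 "$perfpid"
rm -f /var/run/spdk_bdev1
sleep 1
if ! kill -0 "$nvmfpid"; then
    echo "nvmf target died during shutdown test" >&2
    exit 1
fi
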
00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.043 { 00:21:26.043 "params": { 00:21:26.043 "name": "Nvme$subsystem", 00:21:26.043 "trtype": "$TEST_TRANSPORT", 00:21:26.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.043 "adrfam": "ipv4", 00:21:26.043 "trsvcid": "$NVMF_PORT", 00:21:26.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.043 "hdgst": ${hdgst:-false}, 00:21:26.043 "ddgst": ${ddgst:-false} 00:21:26.043 }, 00:21:26.043 "method": "bdev_nvme_attach_controller" 00:21:26.043 } 00:21:26.043 EOF 00:21:26.043 )") 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.043 { 00:21:26.043 "params": { 00:21:26.043 "name": "Nvme$subsystem", 00:21:26.043 "trtype": "$TEST_TRANSPORT", 00:21:26.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.043 "adrfam": "ipv4", 00:21:26.043 "trsvcid": "$NVMF_PORT", 00:21:26.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.043 "hdgst": ${hdgst:-false}, 00:21:26.043 "ddgst": ${ddgst:-false} 00:21:26.043 }, 00:21:26.043 "method": "bdev_nvme_attach_controller" 00:21:26.043 } 00:21:26.043 EOF 00:21:26.043 )") 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.043 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.043 { 00:21:26.043 "params": { 00:21:26.043 "name": "Nvme$subsystem", 00:21:26.043 "trtype": "$TEST_TRANSPORT", 00:21:26.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "$NVMF_PORT", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.044 "hdgst": ${hdgst:-false}, 00:21:26.044 "ddgst": ${ddgst:-false} 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 } 00:21:26.044 EOF 00:21:26.044 )") 00:21:26.044 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.044 09:23:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:26.044 { 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme$subsystem", 00:21:26.044 "trtype": "$TEST_TRANSPORT", 00:21:26.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "$NVMF_PORT", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.044 "hdgst": ${hdgst:-false}, 00:21:26.044 "ddgst": ${ddgst:-false} 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 } 00:21:26.044 EOF 00:21:26.044 )") 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.044 { 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme$subsystem", 00:21:26.044 "trtype": "$TEST_TRANSPORT", 00:21:26.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "$NVMF_PORT", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.044 "hdgst": ${hdgst:-false}, 00:21:26.044 "ddgst": ${ddgst:-false} 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 } 00:21:26.044 EOF 00:21:26.044 )") 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.044 { 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme$subsystem", 00:21:26.044 "trtype": "$TEST_TRANSPORT", 00:21:26.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "$NVMF_PORT", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.044 "hdgst": ${hdgst:-false}, 00:21:26.044 "ddgst": ${ddgst:-false} 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 } 00:21:26.044 EOF 00:21:26.044 )") 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.044 { 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme$subsystem", 00:21:26.044 "trtype": "$TEST_TRANSPORT", 00:21:26.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "$NVMF_PORT", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.044 "hdgst": ${hdgst:-false}, 00:21:26.044 "ddgst": ${ddgst:-false} 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 } 00:21:26.044 EOF 00:21:26.044 )") 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.044 [2024-11-19 09:23:27.024809] Starting SPDK 
v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:21:26.044 [2024-11-19 09:23:27.024856] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.044 { 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme$subsystem", 00:21:26.044 "trtype": "$TEST_TRANSPORT", 00:21:26.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "$NVMF_PORT", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.044 "hdgst": ${hdgst:-false}, 00:21:26.044 "ddgst": ${ddgst:-false} 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 } 00:21:26.044 EOF 00:21:26.044 )") 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.044 { 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme$subsystem", 00:21:26.044 "trtype": "$TEST_TRANSPORT", 00:21:26.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "$NVMF_PORT", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.044 "hdgst": ${hdgst:-false}, 00:21:26.044 "ddgst": ${ddgst:-false} 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 } 00:21:26.044 EOF 00:21:26.044 )") 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:26.044 { 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme$subsystem", 00:21:26.044 "trtype": "$TEST_TRANSPORT", 00:21:26.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "$NVMF_PORT", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:26.044 "hdgst": ${hdgst:-false}, 00:21:26.044 "ddgst": ${ddgst:-false} 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 } 00:21:26.044 EOF 00:21:26.044 )") 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
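All the heredoc expansions above, and the comma-joined bdev_nvme_attach_controller blocks that printf emits next, are gen_nvmf_target_json at work: one params object is stamped per subsystem id, the pieces are joined, and jq validates the assembled JSON before it is handed to the app over /dev/fd. A reduced sketch of the pattern, with two subsystems instead of ten and without the surrounding config envelope the real helper adds:

# Emit one attach-controller block per subsystem id; jq validates the result.
gen_json() {
    local subsystem
    local -a config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}
gen_json 1 2   # prints a validated two-controller array
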
00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:26.044 09:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme1", 00:21:26.044 "trtype": "tcp", 00:21:26.044 "traddr": "10.0.0.2", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "4420", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:26.044 "hdgst": false, 00:21:26.044 "ddgst": false 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 },{ 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme2", 00:21:26.044 "trtype": "tcp", 00:21:26.044 "traddr": "10.0.0.2", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "4420", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:26.044 "hdgst": false, 00:21:26.044 "ddgst": false 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 },{ 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme3", 00:21:26.044 "trtype": "tcp", 00:21:26.044 "traddr": "10.0.0.2", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "4420", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:26.044 "hdgst": false, 00:21:26.044 "ddgst": false 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 },{ 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme4", 00:21:26.044 "trtype": "tcp", 00:21:26.044 "traddr": "10.0.0.2", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "4420", 00:21:26.044 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:26.044 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:26.044 "hdgst": false, 00:21:26.044 "ddgst": false 00:21:26.044 }, 00:21:26.044 "method": "bdev_nvme_attach_controller" 00:21:26.044 },{ 00:21:26.044 "params": { 00:21:26.044 "name": "Nvme5", 00:21:26.044 "trtype": "tcp", 00:21:26.044 "traddr": "10.0.0.2", 00:21:26.044 "adrfam": "ipv4", 00:21:26.044 "trsvcid": "4420", 00:21:26.045 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:26.045 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:26.045 "hdgst": false, 00:21:26.045 "ddgst": false 00:21:26.045 }, 00:21:26.045 "method": "bdev_nvme_attach_controller" 00:21:26.045 },{ 00:21:26.045 "params": { 00:21:26.045 "name": "Nvme6", 00:21:26.045 "trtype": "tcp", 00:21:26.045 "traddr": "10.0.0.2", 00:21:26.045 "adrfam": "ipv4", 00:21:26.045 "trsvcid": "4420", 00:21:26.045 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:26.045 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:26.045 "hdgst": false, 00:21:26.045 "ddgst": false 00:21:26.045 }, 00:21:26.045 "method": "bdev_nvme_attach_controller" 00:21:26.045 },{ 00:21:26.045 "params": { 00:21:26.045 "name": "Nvme7", 00:21:26.045 "trtype": "tcp", 00:21:26.045 "traddr": "10.0.0.2", 00:21:26.045 "adrfam": "ipv4", 00:21:26.045 "trsvcid": "4420", 00:21:26.045 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:26.045 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:26.045 "hdgst": false, 00:21:26.045 "ddgst": false 00:21:26.045 }, 00:21:26.045 "method": "bdev_nvme_attach_controller" 00:21:26.045 },{ 00:21:26.045 "params": { 00:21:26.045 "name": "Nvme8", 00:21:26.045 "trtype": "tcp", 00:21:26.045 "traddr": "10.0.0.2", 00:21:26.045 "adrfam": "ipv4", 00:21:26.045 "trsvcid": "4420", 00:21:26.045 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:26.045 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:26.045 "hdgst": false, 00:21:26.045 "ddgst": false 00:21:26.045 }, 00:21:26.045 "method": "bdev_nvme_attach_controller" 00:21:26.045 },{ 00:21:26.045 "params": { 00:21:26.045 "name": "Nvme9", 00:21:26.045 "trtype": "tcp", 00:21:26.045 "traddr": "10.0.0.2", 00:21:26.045 "adrfam": "ipv4", 00:21:26.045 "trsvcid": "4420", 00:21:26.045 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:26.045 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:26.045 "hdgst": false, 00:21:26.045 "ddgst": false 00:21:26.045 }, 00:21:26.045 "method": "bdev_nvme_attach_controller" 00:21:26.045 },{ 00:21:26.045 "params": { 00:21:26.045 "name": "Nvme10", 00:21:26.045 "trtype": "tcp", 00:21:26.045 "traddr": "10.0.0.2", 00:21:26.045 "adrfam": "ipv4", 00:21:26.045 "trsvcid": "4420", 00:21:26.045 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:26.045 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:26.045 "hdgst": false, 00:21:26.045 "ddgst": false 00:21:26.045 }, 00:21:26.045 "method": "bdev_nvme_attach_controller" 00:21:26.045 }' 00:21:26.304 [2024-11-19 09:23:27.101153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.304 [2024-11-19 09:23:27.142967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.209 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.209 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:21:28.209 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:28.209 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.209 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.209 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.209 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1169143 00:21:28.209 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:28.209 09:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:29.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1169143 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1168991 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.253 { 00:21:29.253 "params": { 00:21:29.253 "name": "Nvme$subsystem", 00:21:29.253 "trtype": "$TEST_TRANSPORT", 00:21:29.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.253 "adrfam": "ipv4", 00:21:29.253 "trsvcid": "$NVMF_PORT", 00:21:29.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.253 "hdgst": ${hdgst:-false}, 00:21:29.253 "ddgst": ${ddgst:-false} 00:21:29.253 }, 00:21:29.253 "method": "bdev_nvme_attach_controller" 00:21:29.253 } 00:21:29.253 EOF 00:21:29.253 )") 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.253 { 00:21:29.253 "params": { 00:21:29.253 "name": "Nvme$subsystem", 00:21:29.253 "trtype": "$TEST_TRANSPORT", 00:21:29.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.253 "adrfam": "ipv4", 00:21:29.253 "trsvcid": "$NVMF_PORT", 00:21:29.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.253 "hdgst": ${hdgst:-false}, 00:21:29.253 "ddgst": ${ddgst:-false} 00:21:29.253 }, 00:21:29.253 "method": "bdev_nvme_attach_controller" 00:21:29.253 } 00:21:29.253 EOF 00:21:29.253 )") 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.253 { 00:21:29.253 "params": { 00:21:29.253 "name": "Nvme$subsystem", 00:21:29.253 "trtype": "$TEST_TRANSPORT", 00:21:29.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.253 "adrfam": "ipv4", 00:21:29.253 "trsvcid": "$NVMF_PORT", 00:21:29.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.253 "hdgst": ${hdgst:-false}, 00:21:29.253 "ddgst": ${ddgst:-false} 00:21:29.253 }, 00:21:29.253 "method": "bdev_nvme_attach_controller" 00:21:29.253 } 00:21:29.253 EOF 00:21:29.253 )") 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.253 { 00:21:29.253 "params": { 00:21:29.253 "name": "Nvme$subsystem", 00:21:29.253 "trtype": "$TEST_TRANSPORT", 00:21:29.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.253 "adrfam": "ipv4", 00:21:29.253 "trsvcid": "$NVMF_PORT", 00:21:29.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.253 "hdgst": ${hdgst:-false}, 00:21:29.253 "ddgst": ${ddgst:-false} 00:21:29.253 }, 00:21:29.253 "method": "bdev_nvme_attach_controller" 00:21:29.253 } 00:21:29.253 EOF 00:21:29.253 )") 00:21:29.253 09:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.253 { 00:21:29.253 "params": { 00:21:29.253 "name": "Nvme$subsystem", 00:21:29.253 "trtype": "$TEST_TRANSPORT", 00:21:29.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.253 "adrfam": "ipv4", 00:21:29.253 "trsvcid": "$NVMF_PORT", 00:21:29.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.253 "hdgst": ${hdgst:-false}, 00:21:29.253 "ddgst": ${ddgst:-false} 00:21:29.253 }, 00:21:29.253 "method": "bdev_nvme_attach_controller" 00:21:29.253 } 00:21:29.253 EOF 00:21:29.253 )") 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.253 { 00:21:29.253 "params": { 00:21:29.253 "name": "Nvme$subsystem", 00:21:29.253 "trtype": "$TEST_TRANSPORT", 00:21:29.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.253 "adrfam": "ipv4", 00:21:29.253 "trsvcid": "$NVMF_PORT", 00:21:29.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.253 "hdgst": ${hdgst:-false}, 00:21:29.253 "ddgst": ${ddgst:-false} 00:21:29.253 }, 00:21:29.253 "method": "bdev_nvme_attach_controller" 00:21:29.253 } 00:21:29.253 EOF 00:21:29.253 )") 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.253 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.253 { 00:21:29.253 "params": { 00:21:29.253 "name": "Nvme$subsystem", 00:21:29.253 "trtype": "$TEST_TRANSPORT", 00:21:29.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.253 "adrfam": "ipv4", 00:21:29.253 "trsvcid": "$NVMF_PORT", 00:21:29.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.253 "hdgst": ${hdgst:-false}, 00:21:29.253 "ddgst": ${ddgst:-false} 00:21:29.253 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 } 00:21:29.254 EOF 00:21:29.254 )") 00:21:29.254 [2024-11-19 09:23:29.963105] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:21:29.254 [2024-11-19 09:23:29.963155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169684 ] 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.254 { 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme$subsystem", 00:21:29.254 "trtype": "$TEST_TRANSPORT", 00:21:29.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "$NVMF_PORT", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.254 "hdgst": ${hdgst:-false}, 00:21:29.254 "ddgst": ${ddgst:-false} 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 } 00:21:29.254 EOF 00:21:29.254 )") 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.254 { 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme$subsystem", 00:21:29.254 "trtype": "$TEST_TRANSPORT", 00:21:29.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "$NVMF_PORT", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.254 "hdgst": ${hdgst:-false}, 00:21:29.254 "ddgst": ${ddgst:-false} 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 } 00:21:29.254 EOF 00:21:29.254 )") 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:29.254 { 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme$subsystem", 00:21:29.254 "trtype": "$TEST_TRANSPORT", 00:21:29.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "$NVMF_PORT", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.254 "hdgst": ${hdgst:-false}, 00:21:29.254 "ddgst": ${ddgst:-false} 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 } 00:21:29.254 EOF 00:21:29.254 )") 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
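The second generator pass above builds the same ten-controller document again, this time for bdevperf itself; note the fresh DPDK instance (--file-prefix=spdk_pid1169684) in the EAL banner. The --json /dev/fd/62 in the shutdown.sh@92 invocation is the file descriptor bash allocates for process substitution, so the generated JSON is piped straight in without touching disk. Reconstructed under that reading, with the flag values copied from the trace:

# shutdown.sh@92: run a 1-second verify workload (queue depth 64, 64 KiB
# I/Os) against all ten attached controllers, config fed via <(...).
"$rootdir"/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1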
00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:29.254 09:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme1", 00:21:29.254 "trtype": "tcp", 00:21:29.254 "traddr": "10.0.0.2", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "4420", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.254 "hdgst": false, 00:21:29.254 "ddgst": false 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 },{ 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme2", 00:21:29.254 "trtype": "tcp", 00:21:29.254 "traddr": "10.0.0.2", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "4420", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.254 "hdgst": false, 00:21:29.254 "ddgst": false 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 },{ 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme3", 00:21:29.254 "trtype": "tcp", 00:21:29.254 "traddr": "10.0.0.2", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "4420", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.254 "hdgst": false, 00:21:29.254 "ddgst": false 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 },{ 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme4", 00:21:29.254 "trtype": "tcp", 00:21:29.254 "traddr": "10.0.0.2", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "4420", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.254 "hdgst": false, 00:21:29.254 "ddgst": false 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 },{ 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme5", 00:21:29.254 "trtype": "tcp", 00:21:29.254 "traddr": "10.0.0.2", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "4420", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.254 "hdgst": false, 00:21:29.254 "ddgst": false 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 },{ 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme6", 00:21:29.254 "trtype": "tcp", 00:21:29.254 "traddr": "10.0.0.2", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "4420", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.254 "hdgst": false, 00:21:29.254 "ddgst": false 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 },{ 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme7", 00:21:29.254 "trtype": "tcp", 00:21:29.254 "traddr": "10.0.0.2", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "4420", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:29.254 "hdgst": false, 00:21:29.254 "ddgst": false 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 },{ 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme8", 00:21:29.254 "trtype": "tcp", 00:21:29.254 "traddr": "10.0.0.2", 00:21:29.254 "adrfam": "ipv4", 00:21:29.254 "trsvcid": "4420", 00:21:29.254 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.254 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:29.254 "hdgst": false, 00:21:29.254 "ddgst": false 00:21:29.254 }, 00:21:29.254 "method": "bdev_nvme_attach_controller" 00:21:29.254 },{ 00:21:29.254 "params": { 00:21:29.254 "name": "Nvme9", 00:21:29.254 "trtype": "tcp", 00:21:29.255 "traddr": "10.0.0.2", 00:21:29.255 "adrfam": "ipv4", 00:21:29.255 "trsvcid": "4420", 00:21:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.255 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:29.255 "hdgst": false, 00:21:29.255 "ddgst": false 00:21:29.255 }, 00:21:29.255 "method": "bdev_nvme_attach_controller" 00:21:29.255 },{ 00:21:29.255 "params": { 00:21:29.255 "name": "Nvme10", 00:21:29.255 "trtype": "tcp", 00:21:29.255 "traddr": "10.0.0.2", 00:21:29.255 "adrfam": "ipv4", 00:21:29.255 "trsvcid": "4420", 00:21:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.255 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.255 "hdgst": false, 00:21:29.255 "ddgst": false 00:21:29.255 }, 00:21:29.255 "method": "bdev_nvme_attach_controller" 00:21:29.255 }' 00:21:29.255 [2024-11-19 09:23:30.045004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.255 [2024-11-19 09:23:30.092224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.726 Running I/O for 1 seconds... 00:21:31.663 2180.00 IOPS, 136.25 MiB/s 00:21:31.663 Latency(us) 00:21:31.663 [2024-11-19T08:23:32.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.663 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme1n1 : 1.14 281.83 17.61 0.00 0.00 224758.56 27696.08 206979.78 00:21:31.663 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme2n1 : 1.09 235.52 14.72 0.00 0.00 264581.57 18008.15 221568.67 00:21:31.663 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme3n1 : 1.11 292.19 18.26 0.00 0.00 206608.75 14474.91 214274.23 00:21:31.663 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme4n1 : 1.14 280.28 17.52 0.00 0.00 216461.31 13107.20 220656.86 00:21:31.663 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme5n1 : 1.08 236.97 14.81 0.00 0.00 251408.03 17438.27 229774.91 00:21:31.663 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme6n1 : 1.15 277.12 17.32 0.00 0.00 212766.54 16868.40 217921.45 00:21:31.663 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme7n1 : 1.15 277.65 17.35 0.00 0.00 209105.25 14702.86 235245.75 00:21:31.663 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme8n1 : 1.15 279.04 17.44 0.00 0.00 204693.86 15842.62 216097.84 00:21:31.663 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme9n1 : 1.16 275.89 17.24 0.00 0.00 204208.93 17210.32 223392.28 00:21:31.663 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:21:31.663 Verification LBA range: start 0x0 length 0x400 00:21:31.663 Nvme10n1 : 1.16 279.46 17.47 0.00 0.00 198364.81 1282.23 237069.36 00:21:31.663 [2024-11-19T08:23:32.722Z] =================================================================================================================== 00:21:31.663 [2024-11-19T08:23:32.722Z] Total : 2715.95 169.75 0.00 0.00 217637.57 1282.23 237069.36 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.922 rmmod nvme_tcp 00:21:31.922 rmmod nvme_fabrics 00:21:31.922 rmmod nvme_keyring 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1168991 ']' 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1168991 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 1168991 ']' 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 1168991 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1168991 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1168991' 00:21:31.922 killing process with pid 1168991 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 1168991 00:21:31.922 09:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 1168991 00:21:32.181 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.181 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.181 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.182 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:32.182 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:32.182 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.182 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.182 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.182 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.182 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.182 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.182 09:23:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.717 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.717 00:21:34.717 real 0m15.172s 00:21:34.717 user 0m33.395s 00:21:34.717 sys 0m5.889s 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:34.718 ************************************ 00:21:34.718 END TEST nvmf_shutdown_tc1 00:21:34.718 ************************************ 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.718 ************************************ 00:21:34.718 START TEST nvmf_shutdown_tc2 00:21:34.718 ************************************ 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:21:34.718 09:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:34.718 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:34.718 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:34.718 Found net devices under 0000:86:00.0: cvl_0_0 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.718 09:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:34.718 Found net devices under 0000:86:00.1: cvl_0_1 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:34.718 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:34.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:21:34.719 00:21:34.719 --- 10.0.0.2 ping statistics --- 00:21:34.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.719 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:34.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:21:34.719 00:21:34.719 --- 10.0.0.1 ping statistics --- 00:21:34.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.719 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:34.719 09:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1170726 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1170726 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1170726 ']' 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:34.719 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.719 [2024-11-19 09:23:35.742376] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:21:34.719 [2024-11-19 09:23:35.742428] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.978 [2024-11-19 09:23:35.822353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.978 [2024-11-19 09:23:35.864368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.978 [2024-11-19 09:23:35.864406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.978 [2024-11-19 09:23:35.864413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.978 [2024-11-19 09:23:35.864419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.978 [2024-11-19 09:23:35.864425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
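Before this target comes up, nvmftestinit has already rebuilt the test network traced above: both E810 ports (0x8086:0x159b) were detected, cvl_0_0 was moved into a private namespace for the target side, and a ping in each direction proved the 10.0.0.0/24 link. Condensed to its effective commands (interface names and addresses are the ones this run used):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then launched inside that namespace with -m 0x1E, a core mask selecting cores 1 through 4, which is why the reactor notices that follow report exactly those four cores.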
00:21:34.978 [2024-11-19 09:23:35.866027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.978 [2024-11-19 09:23:35.866114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.978 [2024-11-19 09:23:35.866224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.978 [2024-11-19 09:23:35.866225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:34.978 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:34.978 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:21:34.978 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:34.978 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.978 09:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.978 [2024-11-19 09:23:36.010877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.978 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.237 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.237 Malloc1 00:21:35.237 [2024-11-19 09:23:36.121468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.237 Malloc2 00:21:35.237 Malloc3 00:21:35.237 Malloc4 00:21:35.237 Malloc5 00:21:35.496 Malloc6 00:21:35.496 Malloc7 00:21:35.496 Malloc8 00:21:35.496 Malloc9 00:21:35.496 Malloc10 00:21:35.496 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.496 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:35.496 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.496 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1170843 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1170843 /var/tmp/bdevperf.sock 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1170843 ']' 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.756 09:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.756 { 00:21:35.756 "params": { 00:21:35.756 "name": "Nvme$subsystem", 00:21:35.756 "trtype": "$TEST_TRANSPORT", 00:21:35.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.756 "adrfam": "ipv4", 00:21:35.756 "trsvcid": "$NVMF_PORT", 00:21:35.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.756 "hdgst": ${hdgst:-false}, 00:21:35.756 "ddgst": ${ddgst:-false} 00:21:35.756 }, 00:21:35.756 "method": "bdev_nvme_attach_controller" 00:21:35.756 } 00:21:35.756 EOF 00:21:35.756 )") 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.756 { 00:21:35.756 "params": { 00:21:35.756 "name": "Nvme$subsystem", 00:21:35.756 "trtype": "$TEST_TRANSPORT", 00:21:35.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.756 "adrfam": "ipv4", 00:21:35.756 "trsvcid": "$NVMF_PORT", 00:21:35.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.756 "hdgst": ${hdgst:-false}, 00:21:35.756 "ddgst": ${ddgst:-false} 00:21:35.756 }, 00:21:35.756 "method": "bdev_nvme_attach_controller" 00:21:35.756 } 00:21:35.756 EOF 00:21:35.756 )") 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.756 { 00:21:35.756 "params": { 00:21:35.756 
"name": "Nvme$subsystem", 00:21:35.756 "trtype": "$TEST_TRANSPORT", 00:21:35.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.756 "adrfam": "ipv4", 00:21:35.756 "trsvcid": "$NVMF_PORT", 00:21:35.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.756 "hdgst": ${hdgst:-false}, 00:21:35.756 "ddgst": ${ddgst:-false} 00:21:35.756 }, 00:21:35.756 "method": "bdev_nvme_attach_controller" 00:21:35.756 } 00:21:35.756 EOF 00:21:35.756 )") 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.756 { 00:21:35.756 "params": { 00:21:35.756 "name": "Nvme$subsystem", 00:21:35.756 "trtype": "$TEST_TRANSPORT", 00:21:35.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.756 "adrfam": "ipv4", 00:21:35.756 "trsvcid": "$NVMF_PORT", 00:21:35.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.756 "hdgst": ${hdgst:-false}, 00:21:35.756 "ddgst": ${ddgst:-false} 00:21:35.756 }, 00:21:35.756 "method": "bdev_nvme_attach_controller" 00:21:35.756 } 00:21:35.756 EOF 00:21:35.756 )") 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.756 { 00:21:35.756 "params": { 00:21:35.756 "name": "Nvme$subsystem", 00:21:35.756 "trtype": "$TEST_TRANSPORT", 00:21:35.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.756 "adrfam": "ipv4", 00:21:35.756 "trsvcid": "$NVMF_PORT", 00:21:35.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.756 "hdgst": ${hdgst:-false}, 00:21:35.756 "ddgst": ${ddgst:-false} 00:21:35.756 }, 00:21:35.756 "method": "bdev_nvme_attach_controller" 00:21:35.756 } 00:21:35.756 EOF 00:21:35.756 )") 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.756 { 00:21:35.756 "params": { 00:21:35.756 "name": "Nvme$subsystem", 00:21:35.756 "trtype": "$TEST_TRANSPORT", 00:21:35.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.756 "adrfam": "ipv4", 00:21:35.756 "trsvcid": "$NVMF_PORT", 00:21:35.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.756 "hdgst": ${hdgst:-false}, 00:21:35.756 "ddgst": ${ddgst:-false} 00:21:35.756 }, 00:21:35.756 "method": "bdev_nvme_attach_controller" 00:21:35.756 } 00:21:35.756 EOF 00:21:35.756 )") 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:35.756 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.756 { 00:21:35.756 "params": { 00:21:35.756 "name": "Nvme$subsystem", 00:21:35.756 "trtype": "$TEST_TRANSPORT", 00:21:35.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.756 "adrfam": "ipv4", 00:21:35.756 "trsvcid": "$NVMF_PORT", 00:21:35.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.756 "hdgst": ${hdgst:-false}, 00:21:35.757 "ddgst": ${ddgst:-false} 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 } 00:21:35.757 EOF 00:21:35.757 )") 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.757 [2024-11-19 09:23:36.597679] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:21:35.757 [2024-11-19 09:23:36.597724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170843 ] 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.757 { 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme$subsystem", 00:21:35.757 "trtype": "$TEST_TRANSPORT", 00:21:35.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "$NVMF_PORT", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.757 "hdgst": ${hdgst:-false}, 00:21:35.757 "ddgst": ${ddgst:-false} 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 } 00:21:35.757 EOF 00:21:35.757 )") 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.757 { 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme$subsystem", 00:21:35.757 "trtype": "$TEST_TRANSPORT", 00:21:35.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "$NVMF_PORT", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.757 "hdgst": ${hdgst:-false}, 00:21:35.757 "ddgst": ${ddgst:-false} 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 } 00:21:35.757 EOF 00:21:35.757 )") 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:35.757 { 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme$subsystem", 00:21:35.757 "trtype": "$TEST_TRANSPORT", 00:21:35.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.757 
"adrfam": "ipv4", 00:21:35.757 "trsvcid": "$NVMF_PORT", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.757 "hdgst": ${hdgst:-false}, 00:21:35.757 "ddgst": ${ddgst:-false} 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 } 00:21:35.757 EOF 00:21:35.757 )") 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:35.757 09:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme1", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 },{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme2", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 },{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme3", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 },{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme4", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 },{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme5", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 },{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme6", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 },{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme7", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 
00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 },{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme8", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 },{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme9", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 },{ 00:21:35.757 "params": { 00:21:35.757 "name": "Nvme10", 00:21:35.757 "trtype": "tcp", 00:21:35.757 "traddr": "10.0.0.2", 00:21:35.757 "adrfam": "ipv4", 00:21:35.757 "trsvcid": "4420", 00:21:35.757 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:35.757 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:35.757 "hdgst": false, 00:21:35.757 "ddgst": false 00:21:35.757 }, 00:21:35.757 "method": "bdev_nvme_attach_controller" 00:21:35.757 }' 00:21:35.757 [2024-11-19 09:23:36.674426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.757 [2024-11-19 09:23:36.715663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.134 Running I/O for 10 seconds... 
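The bdevperf process above was handed its bdev configuration on fd 63 by gen_nvmf_target_json; the printf output above shows the ten attach entries it contains. For reference, a minimal standalone sketch of the same step with a single controller, assuming a target already listening on 10.0.0.2:4420 and an SPDK tree at $SPDK (the params block mirrors the trace; the surrounding "subsystems"/"bdev" wrapper is the standard SPDK JSON-config shape, not quoted from this log):

# Sketch: one bdev_nvme_attach_controller entry instead of ten.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as shutdown.sh@103 above: 64 outstanding IOs of 64 KiB each,
# verify workload, 10 second run, RPC socket at /var/tmp/bdevperf.sock.
$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10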
00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1170843 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1170843 ']' 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1170843 00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@957 -- # uname
00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1170843
00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1170843'
00:21:37.702 killing process with pid 1170843
00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1170843
00:21:37.702 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1170843
00:21:37.702 Received shutdown signal, test time was about 0.679261 seconds
00:21:37.702
00:21:37.702 Latency(us)
00:21:37.702 [2024-11-19T08:23:38.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:37.702 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme1n1 : 0.67 287.33 17.96 0.00 0.00 219119.97 42626.89 193302.71
00:21:37.702 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme2n1 : 0.68 282.94 17.68 0.00 0.00 217185.65 17210.32 227039.50
00:21:37.702 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme3n1 : 0.65 294.25 18.39 0.00 0.00 202924.15 25302.59 201508.95
00:21:37.702 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme4n1 : 0.66 292.55 18.28 0.00 0.00 199008.91 15956.59 207891.59
00:21:37.702 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme5n1 : 0.66 288.82 18.05 0.00 0.00 194789.14 29633.67 206067.98
00:21:37.702 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme6n1 : 0.66 289.87 18.12 0.00 0.00 190541.47 16298.52 215186.03
00:21:37.702 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme7n1 : 0.67 285.69 17.86 0.00 0.00 188604.48 16298.52 217009.64
00:21:37.702 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme8n1 : 0.68 284.44 17.78 0.00 0.00 184301.30 15386.71 209715.20
00:21:37.702 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme9n1 : 0.64 199.36 12.46 0.00 0.00 252420.45 32597.04 223392.28
00:21:37.702 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.702 Verification LBA range: start 0x0 length 0x400
00:21:37.702 Nvme10n1 : 0.65 204.59 12.79 0.00
0.00 236526.10 1837.86 238892.97 00:21:37.702 [2024-11-19T08:23:38.761Z] =================================================================================================================== 00:21:37.702 [2024-11-19T08:23:38.761Z] Total : 2709.85 169.37 0.00 0.00 206060.66 1837.86 238892.97 00:21:37.961 09:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1170726 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.898 rmmod nvme_tcp 00:21:38.898 rmmod nvme_fabrics 00:21:38.898 rmmod nvme_keyring 00:21:38.898 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:39.156 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:39.156 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:39.156 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1170726 ']' 00:21:39.156 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1170726 00:21:39.157 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1170726 ']' 00:21:39.157 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1170726 00:21:39.157 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:21:39.157 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:39.157 09:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1170726 00:21:39.157 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:39.157 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:39.157 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1170726' 00:21:39.157 killing process with pid 1170726 00:21:39.157 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1170726 00:21:39.157 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1170726 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.416 09:23:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:41.953 00:21:41.953 real 0m7.082s 00:21:41.953 user 0m20.159s 00:21:41.953 sys 0m1.292s 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:41.953 ************************************ 00:21:41.953 END TEST nvmf_shutdown_tc2 00:21:41.953 ************************************ 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:41.953 ************************************ 00:21:41.953 START TEST nvmf_shutdown_tc3 00:21:41.953 ************************************ 00:21:41.953 09:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@321 -- # local -ga x722 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:41.953 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:41.953 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.953 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:41.954 Found net devices under 0000:86:00.0: cvl_0_0 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:41.954 Found net devices under 0000:86:00.1: cvl_0_1 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.954 09:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:21:41.954 00:21:41.954 --- 10.0.0.2 ping statistics --- 00:21:41.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.954 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:21:41.954 00:21:41.954 --- 10.0.0.1 ping statistics --- 00:21:41.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.954 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1172010 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1172010 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1172010 ']' 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
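Before the target comes up, nvmf_tcp_init has split the two e810 ports found above across namespaces: cvl_0_0 becomes the target-side interface inside cvl_0_0_ns_spdk while cvl_0_1 stays in the root namespace as the initiator side. A condensed sketch of that plumbing, using only commands visible in the trace (the addr-flush steps are omitted):

# Two-port loopback topology from nvmf_tcp_init (condensed from the trace).
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # root ns -> target side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

The nvmf_tgt launched above runs inside that namespace with -m 0x1E, i.e. core mask 0b11110, which is why the reactor notices that follow report cores 1 through 4.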
00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:41.954 09:23:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.954 [2024-11-19 09:23:42.881802] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:21:41.954 [2024-11-19 09:23:42.881850] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.954 [2024-11-19 09:23:42.961497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.954 [2024-11-19 09:23:43.004426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.954 [2024-11-19 09:23:43.004464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.954 [2024-11-19 09:23:43.004472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.954 [2024-11-19 09:23:43.004478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.954 [2024-11-19 09:23:43.004483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.954 [2024-11-19 09:23:43.006016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.954 [2024-11-19 09:23:43.006123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.954 [2024-11-19 09:23:43.006162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.954 [2024-11-19 09:23:43.006164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.214 [2024-11-19 09:23:43.147602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:42.214 09:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.214 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.214 Malloc1 
00:21:42.214 [2024-11-19 09:23:43.263681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.473 Malloc2 00:21:42.473 Malloc3 00:21:42.473 Malloc4 00:21:42.473 Malloc5 00:21:42.473 Malloc6 00:21:42.473 Malloc7 00:21:42.732 Malloc8 00:21:42.732 Malloc9 00:21:42.732 Malloc10 00:21:42.732 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.732 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1172156 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1172156 /var/tmp/bdevperf.sock 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1172156 ']' 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
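Once this second bdevperf answers on its RPC socket, the harness gates progress the same way the tc2 trace above did: framework_wait_init over /var/tmp/bdevperf.sock, then polling bdev_get_iostat until Nvme1n1 shows at least 100 completed reads. A minimal equivalent of that waitforio check, assuming rpc.py from the standard SPDK tree (the one-second poll cadence is an assumption, not quoted from this log):

# Sketch of the readiness/progress gate seen at shutdown.sh@106 and @61 above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
"$rpc" -s "$sock" framework_wait_init        # block until bdevperf answers RPC
for ((i = 10; i != 0; i--)); do              # bounded retries, as at shutdown.sh@60
    ops=$("$rpc" -s "$sock" bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
    ((ops >= 100)) && break                  # enough verified reads observed
    sleep 1                                  # assumed cadence between polls
done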
00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.733 { 00:21:42.733 "params": { 00:21:42.733 "name": "Nvme$subsystem", 00:21:42.733 "trtype": "$TEST_TRANSPORT", 00:21:42.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.733 "adrfam": "ipv4", 00:21:42.733 "trsvcid": "$NVMF_PORT", 00:21:42.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.733 "hdgst": ${hdgst:-false}, 00:21:42.733 "ddgst": ${ddgst:-false} 00:21:42.733 }, 00:21:42.733 "method": "bdev_nvme_attach_controller" 00:21:42.733 } 00:21:42.733 EOF 00:21:42.733 )") 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.733 { 00:21:42.733 "params": { 00:21:42.733 "name": "Nvme$subsystem", 00:21:42.733 "trtype": "$TEST_TRANSPORT", 00:21:42.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.733 "adrfam": "ipv4", 00:21:42.733 "trsvcid": "$NVMF_PORT", 00:21:42.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.733 "hdgst": ${hdgst:-false}, 00:21:42.733 "ddgst": ${ddgst:-false} 00:21:42.733 }, 00:21:42.733 "method": "bdev_nvme_attach_controller" 00:21:42.733 } 00:21:42.733 EOF 00:21:42.733 )") 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.733 { 00:21:42.733 "params": { 00:21:42.733 "name": "Nvme$subsystem", 00:21:42.733 "trtype": "$TEST_TRANSPORT", 00:21:42.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.733 "adrfam": "ipv4", 00:21:42.733 "trsvcid": "$NVMF_PORT", 00:21:42.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.733 "hdgst": ${hdgst:-false}, 00:21:42.733 "ddgst": ${ddgst:-false} 00:21:42.733 }, 00:21:42.733 "method": "bdev_nvme_attach_controller" 00:21:42.733 } 00:21:42.733 EOF 00:21:42.733 )") 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:21:42.733 { 00:21:42.733 "params": { 00:21:42.733 "name": "Nvme$subsystem", 00:21:42.733 "trtype": "$TEST_TRANSPORT", 00:21:42.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.733 "adrfam": "ipv4", 00:21:42.733 "trsvcid": "$NVMF_PORT", 00:21:42.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.733 "hdgst": ${hdgst:-false}, 00:21:42.733 "ddgst": ${ddgst:-false} 00:21:42.733 }, 00:21:42.733 "method": "bdev_nvme_attach_controller" 00:21:42.733 } 00:21:42.733 EOF 00:21:42.733 )") 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.733 { 00:21:42.733 "params": { 00:21:42.733 "name": "Nvme$subsystem", 00:21:42.733 "trtype": "$TEST_TRANSPORT", 00:21:42.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.733 "adrfam": "ipv4", 00:21:42.733 "trsvcid": "$NVMF_PORT", 00:21:42.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.733 "hdgst": ${hdgst:-false}, 00:21:42.733 "ddgst": ${ddgst:-false} 00:21:42.733 }, 00:21:42.733 "method": "bdev_nvme_attach_controller" 00:21:42.733 } 00:21:42.733 EOF 00:21:42.733 )") 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.733 { 00:21:42.733 "params": { 00:21:42.733 "name": "Nvme$subsystem", 00:21:42.733 "trtype": "$TEST_TRANSPORT", 00:21:42.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.733 "adrfam": "ipv4", 00:21:42.733 "trsvcid": "$NVMF_PORT", 00:21:42.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.733 "hdgst": ${hdgst:-false}, 00:21:42.733 "ddgst": ${ddgst:-false} 00:21:42.733 }, 00:21:42.733 "method": "bdev_nvme_attach_controller" 00:21:42.733 } 00:21:42.733 EOF 00:21:42.733 )") 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.733 { 00:21:42.733 "params": { 00:21:42.733 "name": "Nvme$subsystem", 00:21:42.733 "trtype": "$TEST_TRANSPORT", 00:21:42.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.733 "adrfam": "ipv4", 00:21:42.733 "trsvcid": "$NVMF_PORT", 00:21:42.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.733 "hdgst": ${hdgst:-false}, 00:21:42.733 "ddgst": ${ddgst:-false} 00:21:42.733 }, 00:21:42.733 "method": "bdev_nvme_attach_controller" 00:21:42.733 } 00:21:42.733 EOF 00:21:42.733 )") 00:21:42.733 [2024-11-19 09:23:43.736276] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:21:42.733 [2024-11-19 09:23:43.736322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1172156 ] 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.733 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.733 { 00:21:42.733 "params": { 00:21:42.733 "name": "Nvme$subsystem", 00:21:42.733 "trtype": "$TEST_TRANSPORT", 00:21:42.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.733 "adrfam": "ipv4", 00:21:42.733 "trsvcid": "$NVMF_PORT", 00:21:42.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.733 "hdgst": ${hdgst:-false}, 00:21:42.733 "ddgst": ${ddgst:-false} 00:21:42.733 }, 00:21:42.733 "method": "bdev_nvme_attach_controller" 00:21:42.733 } 00:21:42.733 EOF 00:21:42.733 )") 00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.734 { 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme$subsystem", 00:21:42.734 "trtype": "$TEST_TRANSPORT", 00:21:42.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "$NVMF_PORT", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.734 "hdgst": ${hdgst:-false}, 00:21:42.734 "ddgst": ${ddgst:-false} 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 } 00:21:42.734 EOF 00:21:42.734 )") 00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:42.734 { 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme$subsystem", 00:21:42.734 "trtype": "$TEST_TRANSPORT", 00:21:42.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "$NVMF_PORT", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.734 "hdgst": ${hdgst:-false}, 00:21:42.734 "ddgst": ${ddgst:-false} 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 } 00:21:42.734 EOF 00:21:42.734 )") 00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:42.734 09:23:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme1", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 },{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme2", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 },{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme3", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 },{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme4", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 },{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme5", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 },{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme6", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 },{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme7", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 },{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme8", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 },{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme9", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 },{ 00:21:42.734 "params": { 00:21:42.734 "name": "Nvme10", 00:21:42.734 "trtype": "tcp", 00:21:42.734 "traddr": "10.0.0.2", 00:21:42.734 "adrfam": "ipv4", 00:21:42.734 "trsvcid": "4420", 00:21:42.734 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:42.734 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:42.734 "hdgst": false, 00:21:42.734 "ddgst": false 00:21:42.734 }, 00:21:42.734 "method": "bdev_nvme_attach_controller" 00:21:42.734 }' 00:21:42.993 [2024-11-19 09:23:43.810932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.993 [2024-11-19 09:23:43.854029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.372 Running I/O for 10 seconds... 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:44.631 09:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:44.631 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:44.890 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:44.890 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:44.890 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:44.890 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:44.890 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.890 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.149 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.149 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:45.149 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:45.149 09:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:45.424 09:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1172010
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1172010 ']'
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1172010
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1172010
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1172010'
00:21:45.424 killing process with pid 1172010
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 1172010
00:21:45.424 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 1172010
00:21:45.424 [2024-11-19 09:23:46.333065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd39170 is same with the state(6) to be set
00:21:45.424 [2024-11-19 09:23:46.334968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3bbd0 is same with the state(6) to be set
00:21:45.426 [2024-11-19 09:23:46.337749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd39b10 is same with the state(6) to be set
00:21:45.427 [2024-11-19 09:23:46.339268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a000 is same with the state(6) to be set
00:21:45.427 [2024-11-19 09:23:46.340447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set
recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340771] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.340867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a380 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 
00:21:45.428 [2024-11-19 09:23:46.341886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.341999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.342005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.342011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.342018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.342026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.342032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is 
same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.342039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.342045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.428 [2024-11-19 09:23:46.342059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.342263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3a850 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343418] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 
00:21:45.429 [2024-11-19 09:23:46.343564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.429 [2024-11-19 09:23:46.343610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is 
same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.343771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3ad20 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344841] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.344872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.345624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.430 [2024-11-19 09:23:46.345656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.430 [2024-11-19 09:23:46.345666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.430 [2024-11-19 09:23:46.345674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.430 [2024-11-19 09:23:46.345682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.430 [2024-11-19 09:23:46.345689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.430 [2024-11-19 09:23:46.345697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.430 [2024-11-19 09:23:46.345705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.430 [2024-11-19 09:23:46.345713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8981b0 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.345745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.430 [2024-11-19 09:23:46.345754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.430 [2024-11-19 09:23:46.345763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.430 [2024-11-19 09:23:46.345770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.430 [2024-11-19 09:23:46.345778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.430 [2024-11-19 09:23:46.345784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.430 
[2024-11-19 09:23:46.345796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.430 [2024-11-19 09:23:46.345802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.430 [2024-11-19 09:23:46.345809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3e30 is same with the state(6) to be set 00:21:45.430 [2024-11-19 09:23:46.345833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.345842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.345849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.345856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.345863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.345869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.345877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.345884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.345890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e150 is same with the state(6) to be set 00:21:45.431 [2024-11-19 09:23:46.345924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.345933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.345941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.345953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.345961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.345968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.345975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.345982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.345989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7ac610 is same with the state(6) to be set 00:21:45.431 [2024-11-19 09:23:46.346013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6ca0 is same with the state(6) to be set 00:21:45.431 [2024-11-19 09:23:46.346096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x895c70 is same with the state(6) to be set 00:21:45.431 [2024-11-19 09:23:46.346176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 
09:23:46.346193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897d50 is same with the state(6) to be set 00:21:45.431 [2024-11-19 09:23:46.346260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9270 is same with the state(6) to be set 00:21:45.431 [2024-11-19 09:23:46.346344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.431 [2024-11-19 09:23:46.346367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.431 [2024-11-19 09:23:46.346375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.431 [2024-11-19 09:23:46.346381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.431 [2024-11-19 09:23:46.346389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.431 [2024-11-19 09:23:46.346396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.431 [2024-11-19 09:23:46.346403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc35c0 is same with the state(6) to be set
00:21:45.431-433 [2024-11-19 09:23:46.346503-347506] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-63 nsid:1 lba:32768-40832 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.433 [2024-11-19 09:23:46.349245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:45.433 [2024-11-19 09:23:46.349277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x897d50 (9): Bad file descriptor
00:21:45.433 [2024-11-19 09:23:46.349739-350294] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (logged 6 times in this interval)
00:21:45.433 [2024-11-19 09:23:46.350420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.433 [2024-11-19 09:23:46.350437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x897d50 with addr=10.0.0.2, port=4420
00:21:45.433 [2024-11-19 09:23:46.350446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897d50 is same with the state(6) to be set
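For readers decoding the abort storm above: in these log lines the "(00/08)" pair is (status code type / status code). Status code type 0x0 is the NVMe generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which is why every WRITE still in flight completes this way once the submission queue is torn down during the controller reset. Below is a minimal standalone decoder for the completion-queue-entry status fields, following the NVMe base specification's CQE Dword 3 layout; this is an illustration, not SPDK's spdk_nvme_print_completion, and the sample dw3 value is fabricated.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Sample CQE Dword 3 (fabricated): SCT=0x0, SC=0x08, phase=1, M=0, DNR=0. */
    uint32_t dw3 = (0x08u << 17) | (1u << 16);

    unsigned p   = (dw3 >> 16) & 0x1u;  /* phase tag */
    unsigned sc  = (dw3 >> 17) & 0xffu; /* status code */
    unsigned sct = (dw3 >> 25) & 0x7u;  /* status code type */
    unsigned m   = (dw3 >> 30) & 0x1u;  /* more status information available */
    unsigned dnr = (dw3 >> 31) & 0x1u;  /* do-not-retry */

    /* Prints "(00/08) p:1 m:0 dnr:0", matching the log's notation. */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}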
00:21:45.433 [2024-11-19 09:23:46.350498-350677] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:10-20 nsid:1 lba:34048-35328 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.433-434 [2024-11-19 09:23:46.350686-350757] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-4 nsid:1 lba:40960-41472 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.434 [2024-11-19 09:23:46.350766-350966] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:21-33 nsid:1 lba:35456-36992 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.434 [2024-11-19 09:23:46.350974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.434 [2024-11-19 09:23:46.354211-354420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b210 is same with the state(6) to be set (same message repeated at each timestamp in this interval)
00:21:45.434-435 [2024-11-19 09:23:46.355161-355577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd3b6e0 is same with the state(6) to be set (same message repeated at each timestamp in this interval)
00:21:45.435 [2024-11-19 09:23:46.362381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.435-436 [2024-11-19 09:23:46.362396-362930] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:6-9 nsid:1 lba:41728-42112 and READ sqid:1 cid:34-63 nsid:1 lba:37120-40832 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.436 [2024-11-19 09:23:46.362937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbcf00 is same with the state(6) to be set
00:21:45.436 [2024-11-19 09:23:46.363146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x897d50 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.363178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8981b0 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.363195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc3e30 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.363212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e150 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.363245-363287] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-2 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.436 [2024-11-19 09:23:46.363295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.436 [2024-11-19 09:23:46.363302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.436 [2024-11-19 09:23:46.363309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8990 is same with the state(6) to be set
00:21:45.436 [2024-11-19 09:23:46.363325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ac610 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.363341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc6ca0 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.363356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x895c70 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.363373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9270 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.363389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc35c0 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.363406] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:21:45.436 [2024-11-19 09:23:46.364806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:45.436 [2024-11-19 09:23:46.364845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:45.436 [2024-11-19 09:23:46.364856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:45.436 [2024-11-19 09:23:46.364867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:45.436 [2024-11-19 09:23:46.364878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:45.436 [2024-11-19 09:23:46.365212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.436 [2024-11-19 09:23:46.365234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x895c70 with addr=10.0.0.2, port=4420
00:21:45.436 [2024-11-19 09:23:46.365246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x895c70 is same with the state(6) to be set
00:21:45.436 [2024-11-19 09:23:46.365677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x895c70 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.365753] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:45.436 [2024-11-19 09:23:46.365804] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:45.436 [2024-11-19 09:23:46.365823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:21:45.436 [2024-11-19 09:23:46.365833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:21:45.436 [2024-11-19 09:23:46.365844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:21:45.436 [2024-11-19 09:23:46.365854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
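The "errno = 111" in the connect() failures above is Linux's ECONNREFUSED: while the target side is being torn down, nothing is accepting on 10.0.0.2:4420, so each host-side reconnect attempt is refused immediately and the controller eventually lands in the failed state. A standalone sketch that reproduces the same errno by connecting to a port with no listener; the loopback address and the NVMe-oF IANA port 4420 are used here only as an example, not the test's configuration:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420), /* assumes nothing is listening on this port locally */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected output: "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}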
00:21:45.436 [2024-11-19 09:23:46.373218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf8990 (9): Bad file descriptor
00:21:45.436 [2024-11-19 09:23:46.373401 - 09:23:46.374675] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:3-63 nsid:1 lba:24960-32640 len:128 and WRITE sqid:1 cid:0-2 nsid:1 lba:32768-33024 len:128, all completed ABORTED - SQ DELETION (00/08) qid:1
00:21:45.437 [2024-11-19 09:23:46.374683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9c4e0 is same with the state(6) to be set
00:21:45.438 [2024-11-19 09:23:46.375700 - 09:23:46.376765] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128, all completed ABORTED - SQ DELETION (00/08) qid:1
00:21:45.440 [2024-11-19 09:23:46.376773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbe1c0 is same with the state(6) to be set
00:21:45.440 [2024-11-19 09:23:46.377769 - 09:23:46.378631] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-50 nsid:1 lba:24576-30976 len:128, all completed ABORTED - SQ DELETION (00/08) qid:1
00:21:45.441 [2024-11-19 09:23:46.378641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.441 [2024-11-19 09:23:46.378648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 
09:23:46.378810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.378844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.378852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdbf6e0 is same with the state(6) to be set 00:21:45.441 [2024-11-19 09:23:46.379859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.379876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.379887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.379895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.379905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.379912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.379921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.441 [2024-11-19 09:23:46.379929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.441 [2024-11-19 09:23:46.379938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.379945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.379959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.379966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.379976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.379982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.379992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.379999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.442 [2024-11-19 09:23:46.380482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.442 [2024-11-19 09:23:46.380489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.380906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.380913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d7b0 is same with the state(6) to be set 00:21:45.443 [2024-11-19 09:23:46.381925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.381943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.381959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.381966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.381976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.381983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.381994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.382001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.382011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.382018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.382027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.382034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.382043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.443 [2024-11-19 09:23:46.382049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.443 [2024-11-19 09:23:46.382059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382151] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.444 [2024-11-19 09:23:46.382661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.444 [2024-11-19 09:23:46.382668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.445 [2024-11-19 09:23:46.382805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 09:23:46.382958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.445 [2024-11-19 09:23:46.382967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.445 [2024-11-19 
09:23:46.382976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.445 [2024-11-19 09:23:46.382983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:45.445 [2024-11-19 09:23:46.382991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ecf0 is same with the state(6) to be set
[... 2024-11-19 09:23:46.383974-385027: 64 READ commands (sqid:1, cid:0-63, nsid:1, lba:24576-32640, len:128, SGL TRANSPORT DATA BLOCK), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:45.447 [2024-11-19 09:23:46.385035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be75f0 is same with the state(6) to be set
[... 2024-11-19 09:23:46.386044-387097: 64 READ commands (sqid:1, cid:0-63, nsid:1, lba:24576-32640, len:128, SGL TRANSPORT DATA BLOCK), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:45.449 [2024-11-19 09:23:46.387104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb6550 is same with the state(6) to be set
00:21:45.449 [2024-11-19 09:23:46.388082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:21:45.449 [2024-11-19 09:23:46.388098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:45.449 [2024-11-19 09:23:46.388108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:21:45.449 [2024-11-19 09:23:46.388117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:45.449 [2024-11-19 09:23:46.388177] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:21:45.449 [2024-11-19 09:23:46.388195] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:21:45.449 [2024-11-19 09:23:46.388208] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:21:45.449 [2024-11-19 09:23:46.388222] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:21:45.449 [2024-11-19 09:23:46.401413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:45.449 [2024-11-19 09:23:46.401443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:21:45.449 [2024-11-19 09:23:46.401460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:45.449 [2024-11-19 09:23:46.401469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:45.449 [2024-11-19 09:23:46.401756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.449 [2024-11-19 09:23:46.401777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x897d50 with addr=10.0.0.2, port=4420
00:21:45.449 [2024-11-19 09:23:46.401788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897d50 is same with the state(6) to be set
00:21:45.449 [2024-11-19 09:23:46.401972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.449 [2024-11-19 09:23:46.401985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8981b0 with addr=10.0.0.2, port=4420
00:21:45.449 [2024-11-19 09:23:46.401993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8981b0 is same with the state(6) to be set
00:21:45.449 [2024-11-19 09:23:46.402235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.449 [2024-11-19 09:23:46.402247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc3e30 with addr=10.0.0.2, port=4420
00:21:45.449 [2024-11-19 09:23:46.402254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3e30 is same with the state(6) to be set
00:21:45.449 [2024-11-19 09:23:46.402447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.449 [2024-11-19 09:23:46.402460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6ca0 with addr=10.0.0.2, port=4420
00:21:45.449 [2024-11-19 09:23:46.402468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6ca0 is same with the state(6) to be set
00:21:45.449 [2024-11-19 09:23:46.402491] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:21:45.449 [2024-11-19 09:23:46.402504] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:21:45.449 [2024-11-19 09:23:46.402523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc6ca0 (9): Bad file descriptor
00:21:45.449 [2024-11-19 09:23:46.402538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc3e30 (9): Bad file descriptor
00:21:45.449 [2024-11-19 09:23:46.402550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8981b0 (9): Bad file descriptor
00:21:45.449 [2024-11-19 09:23:46.402562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x897d50 (9): Bad file descriptor
00:21:45.449 task offset: 32768 on job bdev=Nvme2n1 fails
00:21:45.449 2081.77 IOPS, 130.11 MiB/s [2024-11-19T08:23:46.508Z] [2024-11-19 09:23:46.404298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.449 [2024-11-19 09:23:46.404316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc35c0 with addr=10.0.0.2, port=4420
00:21:45.449 [2024-11-19 09:23:46.404324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc35c0 is same with the state(6) to be set
00:21:45.449 [2024-11-19 09:23:46.404522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.449 [2024-11-19 09:23:46.404535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ac610 with addr=10.0.0.2, port=4420
00:21:45.449 [2024-11-19 09:23:46.404543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ac610 is same with the state(6) to be set
00:21:45.449 [2024-11-19 09:23:46.404673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.449 [2024-11-19 09:23:46.404684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9270 with addr=10.0.0.2, port=4420
00:21:45.449 [2024-11-19 09:23:46.404696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9270 is same with the state(6) to be set
00:21:45.449 [2024-11-19 09:23:46.404861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:45.449 [2024-11-19 09:23:46.404872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e150 with addr=10.0.0.2, port=4420
00:21:45.449 [2024-11-19 09:23:46.404880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e150 is same with the state(6) to be set
00:21:45.449 [2024-11-19 09:23:46.404980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.449 [2024-11-19 09:23:46.404994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-19 09:23:46.405011-405980: 60 READ commands (sqid:1, cid:1-60, nsid:1, lba:24704-32256, len:128, SGL TRANSPORT DATA BLOCK), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:45.451 [2024-11-19 09:23:46.405994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.451 [2024-11-19 09:23:46.406002] nvme_qpair.c:
00:21:45.451 [2024-11-19 09:23:46.406042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb5070 is same with the state(6) to be set
00:21:45.451 [2024-11-19 09:23:46.407044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:21:45.451
00:21:45.451 Latency(us)
00:21:45.451 [2024-11-19T08:23:46.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:45.451 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme1n1 ended in about 0.99 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme1n1 : 0.99 196.98 12.31 64.65 0.00 242183.52 17894.18 217009.64
00:21:45.451 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme2n1 ended in about 0.96 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme2n1 : 0.96 265.75 16.61 66.44 0.00 187384.32 3348.03 219745.06
00:21:45.451 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme3n1 ended in about 0.98 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme3n1 : 0.98 271.78 16.99 65.39 0.00 181577.46 14531.90 218833.25
00:21:45.451 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme4n1 ended in about 0.99 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme4n1 : 0.99 193.54 12.10 64.51 0.00 233515.07 14474.91 221568.67
00:21:45.451 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme5n1 ended in about 0.99 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme5n1 : 0.99 193.14 12.07 64.38 0.00 229995.52 18350.08 217009.64
00:21:45.451 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme6n1 ended in about 1.00 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme6n1 : 1.00 192.74 12.05 64.25 0.00 226553.77 18464.06 217009.64
00:21:45.451 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme7n1 ended in about 1.00 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme7n1 : 1.00 192.34 12.02 64.11 0.00 223087.30 16526.47 227039.50
00:21:45.451 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme8n1 ended in about 1.00 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme8n1 : 1.00 191.95 12.00 63.98 0.00 219649.11 14816.83 219745.06
00:21:45.451 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme9n1 ended in about 1.02 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme9n1 : 1.02 188.01 11.75 62.67 0.00 220989.44 21313.45 224304.08
00:21:45.451 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:45.451 Job: Nvme10n1 ended in about 1.00 seconds with error
00:21:45.451 Verification LBA range: start 0x0 length 0x400
00:21:45.451 Nvme10n1 : 1.00 191.55 11.97 63.85 0.00 212327.74 17552.25 238892.97
00:21:45.451 [2024-11-19T08:23:46.510Z] ===================================================================================================================
00:21:45.451 [2024-11-19T08:23:46.510Z] Total : 2077.78 129.86 644.23 0.00 216044.16 3348.03 238892.97
00:21:45.451 [2024-11-19 09:23:46.438508] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:45.451 [2024-11-19 09:23:46.438558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:45.451 [2024-11-19 09:23:46.438613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc35c0 (9): Bad file descriptor
00:21:45.451 [2024-11-19 09:23:46.438629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ac610 (9): Bad file descriptor
00:21:45.451 [2024-11-19 09:23:46.438638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9270 (9): Bad file descriptor
00:21:45.451 [2024-11-19 09:23:46.438648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e150 (9): Bad file descriptor
00:21:45.451 [2024-11-19 09:23:46.438656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:21:45.451 [2024-11-19 09:23:46.438664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:21:45.451 [2024-11-19 09:23:46.438673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:21:45.452 [2024-11-19 09:23:46.438681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:21:45.452 [2024-11-19 09:23:46.438690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:21:45.452 [2024-11-19 09:23:46.438696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:21:45.452 [2024-11-19 09:23:46.438704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:45.452 [2024-11-19 09:23:46.438710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:21:45.452 [2024-11-19 09:23:46.438718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:21:45.452 [2024-11-19 09:23:46.438724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:21:45.452 [2024-11-19 09:23:46.438730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:21:45.452 [2024-11-19 09:23:46.438738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:45.452 [2024-11-19 09:23:46.438746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:45.452 [2024-11-19 09:23:46.438752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:45.452 [2024-11-19 09:23:46.438758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:45.452 [2024-11-19 09:23:46.438764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:45.452 [2024-11-19 09:23:46.439180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 [2024-11-19 09:23:46.439202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x895c70 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.439213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x895c70 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.439412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 [2024-11-19 09:23:46.439430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf8990 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.439439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf8990 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.439447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:45.452 [2024-11-19 09:23:46.439453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:45.452 [2024-11-19 09:23:46.439461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:45.452 [2024-11-19 09:23:46.439469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:45.452 [2024-11-19 09:23:46.439478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:45.452 [2024-11-19 09:23:46.439484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:45.452 [2024-11-19 09:23:46.439492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:45.452 [2024-11-19 09:23:46.439499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:45.452 [2024-11-19 09:23:46.439507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:45.452 [2024-11-19 09:23:46.439520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:45.452 [2024-11-19 09:23:46.439528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:45.452 [2024-11-19 09:23:46.439534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:21:45.452 [2024-11-19 09:23:46.439541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:45.452 [2024-11-19 09:23:46.439548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:45.452 [2024-11-19 09:23:46.439556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:45.452 [2024-11-19 09:23:46.439562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:45.452 [2024-11-19 09:23:46.439906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x895c70 (9): Bad file descriptor 00:21:45.452 [2024-11-19 09:23:46.439922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf8990 (9): Bad file descriptor 00:21:45.452 [2024-11-19 09:23:46.440190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:45.452 [2024-11-19 09:23:46.440206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:45.452 [2024-11-19 09:23:46.440215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:45.452 [2024-11-19 09:23:46.440224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:45.452 [2024-11-19 09:23:46.440233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:45.452 [2024-11-19 09:23:46.440242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:45.452 [2024-11-19 09:23:46.440284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:45.452 [2024-11-19 09:23:46.440292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:45.452 [2024-11-19 09:23:46.440300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:45.452 [2024-11-19 09:23:46.440311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:45.452 [2024-11-19 09:23:46.440320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:45.452 [2024-11-19 09:23:46.440327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:45.452 [2024-11-19 09:23:46.440334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:45.452 [2024-11-19 09:23:46.440342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:45.452 [2024-11-19 09:23:46.440369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:45.452 [2024-11-19 09:23:46.440379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:45.452 [2024-11-19 09:23:46.440601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 [2024-11-19 09:23:46.440617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6ca0 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.440627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc6ca0 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.440829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 [2024-11-19 09:23:46.440840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc3e30 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.440848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc3e30 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.440988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 [2024-11-19 09:23:46.441000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8981b0 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.441009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8981b0 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.441148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 [2024-11-19 09:23:46.441161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x897d50 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.441168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897d50 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.441224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 [2024-11-19 09:23:46.441236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0e150 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.441243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e150 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.441382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 [2024-11-19 09:23:46.441394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc9270 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.441402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc9270 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.441639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 [2024-11-19 09:23:46.441652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ac610 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.441661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ac610 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.441808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.452 
[2024-11-19 09:23:46.441820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc35c0 with addr=10.0.0.2, port=4420 00:21:45.452 [2024-11-19 09:23:46.441832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc35c0 is same with the state(6) to be set 00:21:45.452 [2024-11-19 09:23:46.441843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc6ca0 (9): Bad file descriptor 00:21:45.453 [2024-11-19 09:23:46.441853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc3e30 (9): Bad file descriptor 00:21:45.453 [2024-11-19 09:23:46.441863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8981b0 (9): Bad file descriptor 00:21:45.453 [2024-11-19 09:23:46.441873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x897d50 (9): Bad file descriptor 00:21:45.453 [2024-11-19 09:23:46.441882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0e150 (9): Bad file descriptor 00:21:45.453 [2024-11-19 09:23:46.441892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc9270 (9): Bad file descriptor 00:21:45.453 [2024-11-19 09:23:46.441917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ac610 (9): Bad file descriptor 00:21:45.453 [2024-11-19 09:23:46.441928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc35c0 (9): Bad file descriptor 00:21:45.453 [2024-11-19 09:23:46.441936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:45.453 [2024-11-19 09:23:46.441943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:45.453 [2024-11-19 09:23:46.441967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:45.453 [2024-11-19 09:23:46.441975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:45.453 [2024-11-19 09:23:46.441982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:45.453 [2024-11-19 09:23:46.441989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:45.453 [2024-11-19 09:23:46.441996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:45.453 [2024-11-19 09:23:46.442003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:45.453 [2024-11-19 09:23:46.442010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:45.453 [2024-11-19 09:23:46.442018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:45.453 [2024-11-19 09:23:46.442025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:45.453 [2024-11-19 09:23:46.442032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:45.453 [2024-11-19 09:23:46.442039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:45.453 [2024-11-19 09:23:46.442045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:45.453 [2024-11-19 09:23:46.442052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:45.453 [2024-11-19 09:23:46.442058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:45.453 [2024-11-19 09:23:46.442066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:45.453 [2024-11-19 09:23:46.442073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:45.453 [2024-11-19 09:23:46.442080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:45.453 [2024-11-19 09:23:46.442086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:45.453 [2024-11-19 09:23:46.442096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:45.453 [2024-11-19 09:23:46.442104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:45.453 [2024-11-19 09:23:46.442110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:45.453 [2024-11-19 09:23:46.442117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:45.453 [2024-11-19 09:23:46.442141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:45.453 [2024-11-19 09:23:46.442149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:45.453 [2024-11-19 09:23:46.442156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:45.453 [2024-11-19 09:23:46.442163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:45.453 [2024-11-19 09:23:46.442170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:45.453 [2024-11-19 09:23:46.442177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:45.453 [2024-11-19 09:23:46.442184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:45.453 [2024-11-19 09:23:46.442191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:21:45.712 09:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1172156 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1172156 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1172156 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:47.091 rmmod nvme_tcp 00:21:47.091 
rmmod nvme_fabrics 00:21:47.091 rmmod nvme_keyring 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1172010 ']' 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1172010 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1172010 ']' 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1172010 00:21:47.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1172010) - No such process 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1172010 is not found' 00:21:47.091 Process with pid 1172010 is not found 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.091 09:23:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:49.000 00:21:49.000 real 0m7.383s 00:21:49.000 user 0m17.642s 00:21:49.000 sys 0m1.333s 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:49.000 ************************************ 00:21:49.000 END TEST nvmf_shutdown_tc3 00:21:49.000 ************************************ 00:21:49.000 09:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:49.000 ************************************ 00:21:49.000 START TEST nvmf_shutdown_tc4 00:21:49.000 ************************************ 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.000 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:49.001 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:49.001 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.001 09:23:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.001 09:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:49.001 Found net devices under 0000:86:00.0: cvl_0_0 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:49.001 Found net devices under 0000:86:00.1: cvl_0_1 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.001 09:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.001 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:21:49.261 00:21:49.261 --- 10.0.0.2 ping statistics --- 00:21:49.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.261 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:49.261 00:21:49.261 --- 10.0.0.1 ping statistics --- 00:21:49.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.261 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:49.261 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1173423 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1173423 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 1173423 ']' 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1173423
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1173423
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 1173423 ']'
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:49.262 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:49.521 [2024-11-19 09:23:50.340542] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:21:49.521 [2024-11-19 09:23:50.340586] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:49.521 [2024-11-19 09:23:50.417583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:49.521 [2024-11-19 09:23:50.458658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:49.521 [2024-11-19 09:23:50.458699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:49.521 [2024-11-19 09:23:50.458707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:49.521 [2024-11-19 09:23:50.458713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:49.521 [2024-11-19 09:23:50.458718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:49.521 [2024-11-19 09:23:50.460186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:21:49.521 [2024-11-19 09:23:50.460224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:21:49.521 [2024-11-19 09:23:50.460330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:49.521 [2024-11-19 09:23:50.460331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:21:49.521 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:21:49.521 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0
00:21:49.521 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:49.521 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:49.521 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:49.781 [2024-11-19 09:23:50.609625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
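nvmfappstart launches build/bin/nvmf_tgt inside the namespace with core mask 0x1E (binary 11110, i.e. cores 1 through 4, matching the four reactor lines above), waitforlisten polls /var/tmp/spdk.sock until the RPC socket answers, and rpc_cmd then drives scripts/rpc.py; the '-t tcp -o -u 8192' transport options produce the 'TCP Transport Init' notice. A rough standalone equivalent, assuming an SPDK checkout (the readiness probe is a crude stand-in for waitforlisten, not the harness's code):

    # Start the target in the namespace and create the TCP transport (sketch).
    ip netns exec nvmf_tgt_ns ./build/bin/nvmf_tgt -m 0x1E &      # 0x1E => reactors on cores 1,2,3,4
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do    # wait for /var/tmp/spdk.sock
        sleep 0.2
    done
    ./scripts/rpc.py nvmf_create_transport -t TCP -u 8192         # -u: io-unit-size, as in the log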
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
[... the shutdown.sh@28 / shutdown.sh@29 trace pair above repeats identically for each of the 10 subsystems ...]
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:49.781 09:23:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
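The shutdown.sh@28/@29 loop runs once per subsystem, appending a block of RPC calls to rpcs.txt, and the single rpc_cmd at @36 then replays the whole file over the socket in one batch; the Malloc1 through Malloc10 names echoed below are the malloc bdevs created by those blocks, one per nqn.2016-06.io.spdk:cnodeN subsystem. The exact block lives in test/nvmf/target/shutdown.sh; this is a plausible reconstruction of one iteration, with illustrative sizes and serial numbers:

    # What each loop iteration roughly appends for subsystem $i (reconstruction, not a copy).
    {
        echo "bdev_malloc_create -b Malloc$i 128 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> rpcs.txt
    # Afterwards one batched call replays the file: rpc_cmd < rpcs.txt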
00:21:49.781 Malloc1
00:21:49.781 [2024-11-19 09:23:50.723433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:49.781 Malloc2
00:21:49.781 Malloc3
00:21:49.781 Malloc4
00:21:50.041 Malloc5
00:21:50.041 Malloc6
00:21:50.041 Malloc7
00:21:50.041 Malloc8
00:21:50.041 Malloc9
00:21:50.041 Malloc10
00:21:50.299 09:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:50.299 09:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:21:50.299 09:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:50.299 09:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:50.299 09:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1173486
00:21:50.299 09:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:21:50.299 09:23:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:21:50.299 [2024-11-19 09:23:51.230039] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1173423
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1173423 ']'
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1173423
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1173423
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1173423'
killing process with pid 1173423
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 1173423
00:21:55.577 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 1173423
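This is the heart of tc4: spdk_nvme_perf is started against the target (queue depth 128, 45056-byte random writes, 20-second run, -P 4 I/O qpairs, which is why the errors below mention qpair ids 1 through 4), the harness sleeps 5 seconds so I/O reaches steady state, and killprocess then SIGTERMs the target out from under the live workload. The pass criterion is that the target exits cleanly and the initiator sees aborted writes rather than a hang. Reduced to its shape (a sketch built from the commands logged above, not the harness's exact code):

    # Shape of the tc4 shutdown test: kill the target under active I/O.
    ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5                  # let the workload ramp up
    kill "$nvmfpid"          # SIGTERM the nvmf_tgt mid-write
    wait "$nvmfpid"          # target must exit cleanly; perf reports the aborted I/O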
00:21:55.577 Write completed with error (sct=0, sc=8)
00:21:55.577 Write completed with error (sct=0, sc=8)
00:21:55.577 Write completed with error (sct=0, sc=8)
00:21:55.577 Write completed with error (sct=0, sc=8)
00:21:55.577 starting I/O failed: -6
[... many interleaved "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted ...]
00:21:55.577 [2024-11-19 09:23:56.221193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1179dc0 is same with the state(6) to be set
[... the tqpair=0x1179dc0 message repeats through 09:23:56.221288 ...]
00:21:55.578 [2024-11-19 09:23:56.221501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.578 [2024-11-19 09:23:56.221815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a2b0 is same with the state(6) to be set
[... the tqpair=0x117a2b0 message repeats through 09:23:56.221906; write errors continue ...]
00:21:55.578 [2024-11-19 09:23:56.222339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.578 [2024-11-19 09:23:56.222337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117a7a0 is same with the state(6) to be set
[... the tqpair=0x117a7a0 message repeats through 09:23:56.222389; write errors continue ...]
00:21:55.579 [2024-11-19 09:23:56.224786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.579 NVMe io qpair process completion error
[... write errors continue ...]
00:21:55.579 [2024-11-19 09:23:56.225817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write errors continue ...]
00:21:55.580 [2024-11-19 09:23:56.226754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write errors continue ...]
00:21:55.580 [2024-11-19 09:23:56.227763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write errors continue ...]
00:21:55.581 [2024-11-19 09:23:56.229704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.581 NVMe io qpair process completion error
[... write errors continue ...]
00:21:55.581 [2024-11-19 09:23:56.230740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write errors continue ...]
00:21:55.581 [2024-11-19 09:23:56.231553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... write errors continue ...]
00:21:55.582 [2024-11-19 09:23:56.232599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... write errors continue ...]
00:21:55.582 [2024-11-19 09:23:56.234546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.582 NVMe io qpair process completion error
[... write errors continue ...]
00:21:55.583 [2024-11-19 09:23:56.235508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... write errors continue; the excerpt ends mid-flood ...]
starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 [2024-11-19 09:23:56.236433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 
00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 [2024-11-19 09:23:56.237471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.583 Write completed with error (sct=0, sc=8) 00:21:55.583 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 
00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 [2024-11-19 09:23:56.239422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.584 NVMe io qpair process completion error 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write 
completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 [2024-11-19 09:23:56.240518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error 
(sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 starting I/O failed: -6 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.584 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 [2024-11-19 09:23:56.241357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 
Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 [2024-11-19 09:23:56.242392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting 
I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O 
failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 Write completed with error (sct=0, sc=8) 00:21:55.585 starting I/O failed: -6 00:21:55.585 [2024-11-19 09:23:56.244252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.586 NVMe io qpair process completion error 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 [2024-11-19 09:23:56.245159] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 [2024-11-19 09:23:56.246062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error 
(sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 [2024-11-19 09:23:56.247077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.586 Write completed with error (sct=0, sc=8) 00:21:55.586 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 
00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 [2024-11-19 09:23:56.250723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.587 NVMe io qpair process completion error 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 
00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 starting I/O failed: -6 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.587 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 [2024-11-19 09:23:56.251818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:55.588 starting I/O failed: -6 00:21:55.588 starting I/O failed: -6 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write completed with error (sct=0, sc=8) 00:21:55.588 starting I/O failed: -6 00:21:55.588 Write 
completed with error (sct=0, sc=8)
00:21:55.588 [Several hundred repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries condensed; the unique per-qpair transport errors they surrounded are retained below.]
00:21:55.588 [2024-11-19 09:23:56.252759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.588 [2024-11-19 09:23:56.253788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.589 [2024-11-19 09:23:56.255508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.589 NVMe io qpair process completion error
00:21:55.589 [2024-11-19 09:23:56.256479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.590 [2024-11-19 09:23:56.257456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.590 [2024-11-19 09:23:56.258459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.591 [2024-11-19 09:23:56.260446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.591 NVMe io qpair process completion error
00:21:55.591 [2024-11-19 09:23:56.261469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.591 [2024-11-19 09:23:56.262403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.592 [2024-11-19 09:23:56.263454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.592 [2024-11-19 09:23:56.269825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.592 NVMe io qpair process completion error
00:21:55.592 [2024-11-19 09:23:56.270779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.593 [2024-11-19 09:23:56.271695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.593 [2024-11-19 09:23:56.272724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.594 [2024-11-19 09:23:56.276931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.594 NVMe io qpair process completion error
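The failures above all funnel through one host-side contract: spdk_nvme_qpair_process_completions() stops returning completion counts and instead returns -6 (-ENXIO, "No such device or address") once the TCP connection behind a qpair is gone, which is exactly what this shutdown test provokes by killing the target mid-write. A minimal sketch of that polling pattern, for orientation only (this is not code from the test; only the documented behavior of the one SPDK call it uses is assumed):

```c
#include <stdio.h>

#include "spdk/nvme.h"

/* Poll one I/O qpair for completions. The return value is the number of
 * completions reaped (>= 0), or a negative errno -- e.g. -ENXIO (-6,
 * "No such device or address") -- once the transport behind the qpair
 * has failed, matching the "CQ transport error -6" lines in this log. */
static void
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* Second argument 0 = no limit on completions reaped per call. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* Every I/O still queued on this qpair now completes with an
		 * error status (the repeated "Write completed with error"
		 * entries); the application must reconnect or destroy the
		 * qpair instead of submitting more I/O. */
		fprintf(stderr, "qpair completion polling failed: %d\n", rc);
	}
}
```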
00:21:55.594 Initializing NVMe Controllers
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:55.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:55.594 [Each attach above was followed by the same warning, printed ten times: "Controller IO queue size 128, less than required. Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver." A sketch of the host-side knob this refers to follows the worker-launch line below.]
00:21:55.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:55.594 [Matching "Associating TCP (...) NSID 1 with lcore 0" entries for cnode10, cnode7, cnode6, cnode1, cnode2, cnode4, cnode9, cnode8, and cnode3 condensed.]
00:21:55.594 Initialization complete. Launching workers.
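The repeated queue-size warning records that the target granted I/O queues of only 128 entries, so any deeper queue depth requested by the host is buffered inside the NVMe driver rather than placed on the wire. Where that host-side knob lives in the SPDK API, as a hedged sketch (illustrative only; `ctrlr` is assumed to be an already-attached controller handle, and the concrete sizes are arbitrary, not the values this test used):

```c
#include "spdk/nvme.h"

/* Allocate an I/O qpair whose depth stays at or under what the fabric
 * target will grant (128 entries in this run), so requests are not
 * silently queued at the driver level. */
static struct spdk_nvme_qpair *
alloc_bounded_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.io_queue_size = 64;      /* stay below the target's 128-entry cap */
	opts.io_queue_requests = 128; /* host-side request pool; may exceed the wire depth */

	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
```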
00:21:55.594 ========================================================
00:21:55.594                                                                Latency(us)
00:21:55.594 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2194.23      94.28   58341.55     692.70  111807.96
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2139.65      91.94   59841.62     988.72  112903.99
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2125.06      91.31   60269.62     723.54  115593.68
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2127.17      91.40   60280.31     943.07  122458.02
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2159.54      92.79   58710.80     689.54  107713.40
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2159.96      92.81   58708.57     709.98  106627.16
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2127.81      91.43   59610.32     808.35  106503.52
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2174.13      93.42   58354.12     924.99  105696.44
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2174.55      93.44   58358.86     785.23  104089.35
00:21:55.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2159.75      92.80   58770.86     925.65  104679.91
00:21:55.594 ========================================================
00:21:55.594 Total                                                                    :   21541.84     925.63   59117.39     689.54  122458.02
00:21:55.594
00:21:55.594 [2024-11-19 09:23:56.280020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4bbc0 is same with the state(6) to be set
00:21:55.594 [Nine further identical nvme_tcp_qpair_set_recv_state errors (09:23:56.280072 through .280317) for tqpairs 0xa4d900, 0xa4c410, 0xa4bef0, 0xa4d720, 0xa4dae0, 0xa4b890, 0xa4ca70, 0xa4c740, and 0xa4b560 condensed.]
00:21:55.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:55.594 09:23:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
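As a consistency check on the table (not part of the original log output): spdk_nvme_perf reports throughput as IOPS times a fixed transfer size $S$,

$$\mathrm{MiB/s} = \frac{\mathrm{IOPS} \times S}{2^{20}}, \qquad S = \frac{94.28 \times 2^{20}}{2194.23} \approx 45056\ \text{bytes} = 44\ \text{KiB}.$$

The remaining rows reproduce the same $S$, so the IOPS and MiB/s columns agree with each other; the actual I/O-size flag passed to this run is not visible in the excerpt.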
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1173486 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1173486 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1173486 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:56.975 rmmod nvme_tcp 00:21:56.975 rmmod nvme_fabrics 00:21:56.975 rmmod nvme_keyring 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1173423 ']' 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1173423 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1173423 ']' 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1173423 00:21:56.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1173423) - No such process 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1173423 is not found' 00:21:56.975 Process with pid 1173423 is not found 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.975 09:23:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:58.896 00:21:58.896 real 0m9.773s 00:21:58.896 user 0m24.948s 00:21:58.896 sys 0m5.143s 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:58.896 ************************************ 00:21:58.896 END TEST nvmf_shutdown_tc4 00:21:58.896 ************************************ 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:58.896 00:21:58.896 real 0m39.932s 00:21:58.896 user 1m36.380s 00:21:58.896 sys 0m13.979s 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.896 ************************************ 00:21:58.896 END TEST nvmf_shutdown 00:21:58.896 ************************************ 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:58.896 ************************************ 00:21:58.896 START TEST nvmf_nsid 00:21:58.896 ************************************ 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:58.896 * Looking for test storage... 00:21:58.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:58.896 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:21:59.156 09:23:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:59.156 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:59.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.157 --rc genhtml_branch_coverage=1 00:21:59.157 --rc genhtml_function_coverage=1 00:21:59.157 --rc genhtml_legend=1 00:21:59.157 --rc geninfo_all_blocks=1 00:21:59.157 --rc geninfo_unexecuted_blocks=1 00:21:59.157 00:21:59.157 ' 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:59.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.157 --rc genhtml_branch_coverage=1 00:21:59.157 --rc genhtml_function_coverage=1 00:21:59.157 --rc genhtml_legend=1 00:21:59.157 --rc geninfo_all_blocks=1 00:21:59.157 --rc geninfo_unexecuted_blocks=1 00:21:59.157 00:21:59.157 ' 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:59.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.157 --rc genhtml_branch_coverage=1 00:21:59.157 --rc genhtml_function_coverage=1 00:21:59.157 --rc genhtml_legend=1 00:21:59.157 --rc geninfo_all_blocks=1 00:21:59.157 --rc geninfo_unexecuted_blocks=1 00:21:59.157 00:21:59.157 ' 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:59.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.157 --rc genhtml_branch_coverage=1 00:21:59.157 --rc genhtml_function_coverage=1 00:21:59.157 --rc genhtml_legend=1 00:21:59.157 --rc geninfo_all_blocks=1 00:21:59.157 --rc geninfo_unexecuted_blocks=1 00:21:59.157 00:21:59.157 ' 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.157 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:59.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.158 09:24:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:05.724 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:05.724 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
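The "Found 0000:86:00.0 / 0000:86:00.1" messages above come from matching each PCI function's vendor/device pair against the ID tables just built (0x8086:0x159b is an Intel E810 variant). A condensed sketch of that sysfs walk, simplified from the gather_supported_nvmf_pci_devs logic being traced (the real code also filters on transport type and link state):

    # Report every PCI function whose IDs match the E810 entry used above,
    # then list the net devices registered under that function.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net" 2>/dev/null
        fi
    done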
00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:05.724 Found net devices under 0000:86:00.0: cvl_0_0 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:05.724 Found net devices under 0000:86:00.1: cvl_0_1 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.724 09:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:05.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:05.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:22:05.724 00:22:05.724 --- 10.0.0.2 ping statistics --- 00:22:05.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.724 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:05.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:05.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:22:05.724 00:22:05.724 --- 10.0.0.1 ping statistics --- 00:22:05.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.724 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:05.724 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1178443 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1178443 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1178443 ']' 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.725 09:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.725 [2024-11-19 09:24:06.047281] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
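Condensed from the nvmf_tcp_init trace above: one port of the dual-port NIC is moved into a private network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) reach each other over the physical link, and a ping in each direction verifies connectivity before the target application starts (commands as traced; the iptables ACCEPT rule for port 4420 is omitted here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator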
00:22:05.725 [2024-11-19 09:24:06.047328] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.725 [2024-11-19 09:24:06.127661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.725 [2024-11-19 09:24:06.170825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.725 [2024-11-19 09:24:06.170860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.725 [2024-11-19 09:24:06.170868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.725 [2024-11-19 09:24:06.170874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.725 [2024-11-19 09:24:06.170882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.725 [2024-11-19 09:24:06.171449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1178572 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=a90baa13-b04e-457f-9f33-1d6af5088fe1 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=2d1fda5d-dfab-4c7c-9b58-c799fbb052f9 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=9a287372-2c9d-4534-9730-b6de2bd6c526 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.725 null0 00:22:05.725 null1 00:22:05.725 [2024-11-19 09:24:06.353419] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:22:05.725 [2024-11-19 09:24:06.353465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178572 ] 00:22:05.725 null2 00:22:05.725 [2024-11-19 09:24:06.358862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.725 [2024-11-19 09:24:06.383046] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1178572 /var/tmp/tgt2.sock 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1178572 ']' 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:05.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
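The three uuidgen calls above pin the namespace UUIDs that the test later compares against the NGUIDs reported by the controller; an NVMe NGUID here is simply the namespace UUID with its dashes stripped. A hedged sketch of how such namespaces are typically wired up over RPC (bdev_null_create, nvmf_create_subsystem, and nvmf_subsystem_add_ns are real SPDK RPCs, but the exact arguments below are assumptions for illustration, not read from this trace, and the listener setup is omitted):

    # Back null0 with a 64 MiB, 512 B block null bdev and expose it as a
    # namespace of cnode0 with a fixed UUID (option spellings assumed):
    scripts/rpc.py -s /var/tmp/tgt2.sock bdev_null_create null0 64 512
    scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode0
    scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_subsystem_add_ns \
        -u a90baa13-b04e-457f-9f33-1d6af5088fe1 nqn.2024-10.io.spdk:cnode0 null0
    # The NGUID the host should then see is the UUID without dashes:
    tr -d - <<< a90baa13-b04e-457f-9f33-1d6af5088fe1   # a90baa13b04e457f9f331d6af5088fe1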
00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.725 [2024-11-19 09:24:06.430604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.725 [2024-11-19 09:24:06.477118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:22:05.725 09:24:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:05.984 [2024-11-19 09:24:07.016321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.984 [2024-11-19 09:24:07.032432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:06.242 nvme0n1 nvme0n2 00:22:06.242 nvme1n1 00:22:06.242 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:06.242 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:06.242 09:24:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.178 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:22:07.179 09:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:22:08.114 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:08.114 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:22:08.114 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:08.114 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:22:08.114 09:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid a90baa13-b04e-457f-9f33-1d6af5088fe1 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a90baa13b04e457f9f331d6af5088fe1 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A90BAA13B04E457F9F331D6AF5088FE1 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ A90BAA13B04E457F9F331D6AF5088FE1 == \A\9\0\B\A\A\1\3\B\0\4\E\4\5\7\F\9\F\3\3\1\D\6\A\F\5\0\8\8\F\E\1 ]] 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 2d1fda5d-dfab-4c7c-9b58-c799fbb052f9 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2d1fda5ddfab4c7c9b58c799fbb052f9 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2D1FDA5DDFAB4C7C9B58C799FBB052F9 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 2D1FDA5DDFAB4C7C9B58C799FBB052F9 == \2\D\1\F\D\A\5\D\D\F\A\B\4\C\7\C\9\B\5\8\C\7\9\9\F\B\B\0\5\2\F\9 ]] 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:22:08.372 09:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:22:08.372 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 9a287372-2c9d-4534-9730-b6de2bd6c526 00:22:08.373 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:08.373 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:08.373 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:08.373 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:08.373 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:08.373 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9a2873722c9d45349730b6de2bd6c526 00:22:08.373 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9A2873722C9D45349730B6DE2BD6C526 00:22:08.373 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 9A2873722C9D45349730B6DE2BD6C526 == \9\A\2\8\7\3\7\2\2\C\9\D\4\5\3\4\9\7\3\0\B\6\D\E\2\B\D\6\C\5\2\6 ]] 00:22:08.373 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1178572 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1178572 ']' 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1178572 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1178572 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:08.631 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1178572' 00:22:08.632 killing process with pid 1178572 00:22:08.632 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1178572 00:22:08.632 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1178572 00:22:08.891 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:08.891 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:08.891 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:08.891 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.891 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:22:08.891 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.891 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.891 rmmod nvme_tcp 00:22:08.891 rmmod nvme_fabrics 00:22:09.150 rmmod nvme_keyring 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1178443 ']' 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1178443 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1178443 ']' 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1178443 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:09.150 09:24:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1178443 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1178443' 00:22:09.150 killing process with pid 1178443 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1178443 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1178443 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:09.150 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:09.409 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:09.409 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:09.409 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.409 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:09.409 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.409 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.409 09:24:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.314 09:24:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:11.314 00:22:11.314 real 0m12.415s 00:22:11.314 user 0m9.750s 
00:22:11.314 sys 0m5.485s 00:22:11.314 09:24:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:11.314 09:24:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:11.314 ************************************ 00:22:11.314 END TEST nvmf_nsid 00:22:11.314 ************************************ 00:22:11.314 09:24:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:11.314 00:22:11.314 real 11m57.996s 00:22:11.314 user 25m38.315s 00:22:11.314 sys 3m44.527s 00:22:11.314 09:24:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:11.314 09:24:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:11.314 ************************************ 00:22:11.314 END TEST nvmf_target_extra 00:22:11.314 ************************************ 00:22:11.314 09:24:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:11.314 09:24:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:11.314 09:24:12 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:11.314 09:24:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:11.574 ************************************ 00:22:11.574 START TEST nvmf_host 00:22:11.574 ************************************ 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:11.574 * Looking for test storage... 00:22:11.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.574 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:11.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.575 --rc genhtml_branch_coverage=1 00:22:11.575 --rc genhtml_function_coverage=1 00:22:11.575 --rc genhtml_legend=1 00:22:11.575 --rc geninfo_all_blocks=1 00:22:11.575 --rc geninfo_unexecuted_blocks=1 00:22:11.575 00:22:11.575 ' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:11.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.575 --rc genhtml_branch_coverage=1 00:22:11.575 --rc genhtml_function_coverage=1 00:22:11.575 --rc genhtml_legend=1 00:22:11.575 --rc geninfo_all_blocks=1 00:22:11.575 --rc geninfo_unexecuted_blocks=1 00:22:11.575 00:22:11.575 ' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:11.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.575 --rc genhtml_branch_coverage=1 00:22:11.575 --rc genhtml_function_coverage=1 00:22:11.575 --rc genhtml_legend=1 00:22:11.575 --rc geninfo_all_blocks=1 00:22:11.575 --rc geninfo_unexecuted_blocks=1 00:22:11.575 00:22:11.575 ' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:11.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.575 --rc genhtml_branch_coverage=1 00:22:11.575 --rc genhtml_function_coverage=1 00:22:11.575 --rc genhtml_legend=1 00:22:11.575 --rc geninfo_all_blocks=1 00:22:11.575 --rc geninfo_unexecuted_blocks=1 00:22:11.575 00:22:11.575 ' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
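
Note: the version gate traced above (lt 1.15 2 via cmp_versions) decides whether the installed lcov needs the legacy --rc option spelling. A minimal paraphrase of that field-by-field compare follows; the helper name ver_lt is ours for illustration, not part of the SPDK scripts, which split on ".-:" and handle a few more cases.

    #!/usr/bin/env bash
    # Paraphrase of the cmp_versions logic traced above: split each version
    # on dots and compare numerically field by field, missing fields as 0.
    ver_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field smaller -> less-than
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov 1.15 < 2: use legacy --rc option spelling"
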
00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.575 ************************************ 00:22:11.575 START TEST nvmf_multicontroller 00:22:11.575 ************************************ 00:22:11.575 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:11.835 * Looking for test storage... 
00:22:11.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:11.835 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:11.835 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:22:11.835 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:11.835 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:11.835 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.835 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:11.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.836 --rc genhtml_branch_coverage=1 00:22:11.836 --rc genhtml_function_coverage=1 00:22:11.836 --rc genhtml_legend=1 00:22:11.836 --rc geninfo_all_blocks=1 00:22:11.836 --rc geninfo_unexecuted_blocks=1 00:22:11.836 00:22:11.836 ' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:11.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.836 --rc genhtml_branch_coverage=1 00:22:11.836 --rc genhtml_function_coverage=1 00:22:11.836 --rc genhtml_legend=1 00:22:11.836 --rc geninfo_all_blocks=1 00:22:11.836 --rc geninfo_unexecuted_blocks=1 00:22:11.836 00:22:11.836 ' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:11.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.836 --rc genhtml_branch_coverage=1 00:22:11.836 --rc genhtml_function_coverage=1 00:22:11.836 --rc genhtml_legend=1 00:22:11.836 --rc geninfo_all_blocks=1 00:22:11.836 --rc geninfo_unexecuted_blocks=1 00:22:11.836 00:22:11.836 ' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:11.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.836 --rc genhtml_branch_coverage=1 00:22:11.836 --rc genhtml_function_coverage=1 00:22:11.836 --rc genhtml_legend=1 00:22:11.836 --rc geninfo_all_blocks=1 00:22:11.836 --rc geninfo_unexecuted_blocks=1 00:22:11.836 00:22:11.836 ' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:11.836 09:24:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:11.836 09:24:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:11.836 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.837 09:24:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:18.413 
09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.413 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:18.414 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:18.414 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.414 09:24:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:18.414 Found net devices under 0000:86:00.0: cvl_0_0 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:18.414 Found net devices under 0000:86:00.1: cvl_0_1 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
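
Note: nvmf_tcp_init, traced next, moves one E810 port into a private network namespace so the target and initiator talk over a real link inside a single host. Condensed from the commands below (interface and namespace names match the log); this is a sketch, not the full helper, which also handles virtual setups and a second target IP.

    # Build the target-in-a-namespace topology used by the TCP phy tests.
    TARGET_NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"            # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target reachability
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # and back
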
00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:18.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:22:18.414 00:22:18.414 --- 10.0.0.2 ping statistics --- 00:22:18.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.414 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:18.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:22:18.414 00:22:18.414 --- 10.0.0.1 ping statistics --- 00:22:18.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.414 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:18.414 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1182842 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1182842 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1182842 ']' 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:18.415 09:24:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 [2024-11-19 09:24:18.793021] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:22:18.415 [2024-11-19 09:24:18.793075] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.415 [2024-11-19 09:24:18.875339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:18.415 [2024-11-19 09:24:18.918880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.415 [2024-11-19 09:24:18.918918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.415 [2024-11-19 09:24:18.918925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.415 [2024-11-19 09:24:18.918931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.415 [2024-11-19 09:24:18.918936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.415 [2024-11-19 09:24:18.920384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.415 [2024-11-19 09:24:18.920491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.415 [2024-11-19 09:24:18.920492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 [2024-11-19 09:24:19.057360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 Malloc0 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 [2024-11-19 09:24:19.130071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 [2024-11-19 09:24:19.142032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 Malloc1 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1183019 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1183019 /var/tmp/bdevperf.sock 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1183019 ']' 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:18.415 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:22:18.416 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:18.416 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.416 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.675 NVMe0n1 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.675 1 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.675 request: 00:22:18.675 { 00:22:18.675 "name": "NVMe0", 00:22:18.675 "trtype": "tcp", 00:22:18.675 "traddr": "10.0.0.2", 00:22:18.675 "adrfam": "ipv4", 00:22:18.675 "trsvcid": "4420", 00:22:18.675 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:18.675 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:18.675 "hostaddr": "10.0.0.1", 00:22:18.675 "prchk_reftag": false, 00:22:18.675 "prchk_guard": false, 00:22:18.675 "hdgst": false, 00:22:18.675 "ddgst": false, 00:22:18.675 "allow_unrecognized_csi": false, 00:22:18.675 "method": "bdev_nvme_attach_controller", 00:22:18.675 "req_id": 1 00:22:18.675 } 00:22:18.675 Got JSON-RPC error response 00:22:18.675 response: 00:22:18.675 { 00:22:18.675 "code": -114, 00:22:18.675 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:18.675 } 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:18.675 request: 00:22:18.675 { 00:22:18.675 "name": "NVMe0", 00:22:18.675 "trtype": "tcp", 00:22:18.675 "traddr": "10.0.0.2", 00:22:18.675 "adrfam": "ipv4", 00:22:18.675 "trsvcid": "4420", 00:22:18.675 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:18.675 "hostaddr": "10.0.0.1", 00:22:18.675 "prchk_reftag": false, 00:22:18.675 "prchk_guard": false, 00:22:18.675 "hdgst": false, 00:22:18.675 "ddgst": false, 00:22:18.675 "allow_unrecognized_csi": false, 00:22:18.675 "method": "bdev_nvme_attach_controller", 00:22:18.675 "req_id": 1 00:22:18.675 } 00:22:18.675 Got JSON-RPC error response 00:22:18.675 response: 00:22:18.675 { 00:22:18.675 "code": -114, 00:22:18.675 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:18.675 } 00:22:18.675 09:24:19 
00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:22:18.675 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:22:18.676 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:18.676 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.676 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:18.676 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.676 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:22:18.676 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.676 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:18.935 request:
00:22:18.935 {
00:22:18.935 "name": "NVMe0",
00:22:18.935 "trtype": "tcp",
00:22:18.935 "traddr": "10.0.0.2",
00:22:18.935 "adrfam": "ipv4",
00:22:18.935 "trsvcid": "4420",
00:22:18.935 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:18.935 "hostaddr": "10.0.0.1",
00:22:18.935 "prchk_reftag": false,
00:22:18.935 "prchk_guard": false,
00:22:18.935 "hdgst": false,
00:22:18.935 "ddgst": false,
00:22:18.935 "multipath": "disable",
00:22:18.935 "allow_unrecognized_csi": false,
00:22:18.935 "method": "bdev_nvme_attach_controller",
00:22:18.935 "req_id": 1
00:22:18.935 }
00:22:18.935 Got JSON-RPC error response
00:22:18.935 response:
00:22:18.935 {
00:22:18.935 "code": -114,
00:22:18.935 "message": "A controller named NVMe0 already exists and multipath is disabled"
00:22:18.935 }
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:18.935 request:
00:22:18.935 {
00:22:18.935 "name": "NVMe0",
00:22:18.935 "trtype": "tcp",
00:22:18.935 "traddr": "10.0.0.2",
00:22:18.935 "adrfam": "ipv4",
00:22:18.935 "trsvcid": "4420",
00:22:18.935 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:18.935 "hostaddr": "10.0.0.1",
00:22:18.935 "prchk_reftag": false,
00:22:18.935 "prchk_guard": false,
00:22:18.935 "hdgst": false,
00:22:18.935 "ddgst": false,
00:22:18.935 "multipath": "failover",
00:22:18.935 "allow_unrecognized_csi": false,
00:22:18.935 "method": "bdev_nvme_attach_controller",
00:22:18.935 "req_id": 1
00:22:18.935 }
00:22:18.935 Got JSON-RPC error response
00:22:18.935 response:
00:22:18.935 {
00:22:18.935 "code": -114,
00:22:18.935 "message": "A controller named NVMe0 already exists with the specified network path"
00:22:18.935 }
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:18.935 NVMe0n1
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
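
Note the contrast in the two steps above: -x failover aimed at the one already-registered path is refused with -114, yet the plain attach to port 4421 succeeds, because 4421 is a different network path for the same controller name. The usual way to build a genuine two-path failover controller is to pass -x failover on both legs, each naming a different listener; a hedged sketch under that assumption (both listeners exist on the target in this run):

    # Hedged sketch, not the harness's exact sequence.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x failover     # primary path
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x failover     # alternate path, same bdev
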
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.935 09:24:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:19.195
00:22:19.195 09:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.195 09:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:19.195 09:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:22:19.195 09:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.195 09:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:19.195 09:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.195 09:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:22:19.195 09:24:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:20.572 {
00:22:20.572 "results": [
00:22:20.572 {
00:22:20.572 "job": "NVMe0n1",
00:22:20.572 "core_mask": "0x1",
00:22:20.572 "workload": "write",
00:22:20.572 "status": "finished",
00:22:20.572 "queue_depth": 128,
00:22:20.572 "io_size": 4096,
00:22:20.572 "runtime": 1.005086,
00:22:20.572 "iops": 24393.93245951093,
00:22:20.572 "mibps": 95.28879866996456,
00:22:20.572 "io_failed": 0,
00:22:20.572 "io_timeout": 0,
00:22:20.572 "avg_latency_us": 5236.640346719535,
00:22:20.572 "min_latency_us": 3162.824347826087,
00:22:20.572 "max_latency_us": 10656.72347826087
00:22:20.572 }
00:22:20.572 ],
00:22:20.572 "core_count": 1
00:22:20.572 }
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1183019
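
perform_tests above replays the preconfigured bdevperf job over the same RPC socket the attach calls used, and reports roughly 24.4K IOPS of queue-depth-128, 4 KiB writes in the one-second window. Run standalone, the pairing looks approximately like this (the bdevperf binary path and job flags mirror the logged job shape but are assumptions, not the harness's exact invocation):

    # Hedged sketch: bdevperf in RPC-driven mode plus its test trigger.
    # -z makes bdevperf wait for configuration over the RPC socket.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -z &
    # ...attach controllers via rpc.py as shown earlier, then:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
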
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 1183019 ']'
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1183019
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1183019
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1183019'
00:22:20.573 killing process with pid 1183019
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1183019
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1183019
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat
00:22:20.573 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:20.573 [2024-11-19 09:24:19.246918] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:22:20.573 [2024-11-19 09:24:19.246972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183019 ]
00:22:20.573 [2024-11-19 09:24:19.324153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:20.573 [2024-11-19 09:24:19.366387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:20.573 [2024-11-19 09:24:20.069065] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name d2466494-105a-4570-9138-6de2a5d19783 already exists
00:22:20.573 [2024-11-19 09:24:20.069099] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:d2466494-105a-4570-9138-6de2a5d19783 alias for bdev NVMe1n1
00:22:20.573 [2024-11-19 09:24:20.069109] bdev_nvme.c:4656:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:22:20.573 Running I/O for 1 seconds...
00:22:20.573 24326.00 IOPS, 95.02 MiB/s
00:22:20.573
00:22:20.573 Latency(us)
00:22:20.573 [2024-11-19T08:24:21.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:20.573 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:20.573 NVMe0n1 : 1.01 24393.93 95.29 0.00 0.00 5236.64 3162.82 10656.72
00:22:20.573 [2024-11-19T08:24:21.632Z] ===================================================================================================================
00:22:20.573 [2024-11-19T08:24:21.632Z] Total : 24393.93 95.29 0.00 0.00 5236.64 3162.82 10656.72
00:22:20.573 Received shutdown signal, test time was about 1.000000 seconds
00:22:20.573
00:22:20.573 Latency(us)
00:22:20.573 [2024-11-19T08:24:21.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:20.573 [2024-11-19T08:24:21.632Z] ===================================================================================================================
00:22:20.573 [2024-11-19T08:24:21.632Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:20.573 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:20.573 rmmod nvme_tcp
00:22:20.573 rmmod nvme_fabrics
00:22:20.573 rmmod nvme_keyring
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1182842 ']'
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1182842
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 1182842 ']'
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1182842
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1182842
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1182842'
00:22:20.573 killing process with pid 1182842
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1182842
00:22:20.573 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1182842
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:20.833 09:24:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:23.369 09:24:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:23.369
00:22:23.369 real 0m11.259s
00:22:23.369 user 0m12.687s
00:22:23.369 sys 0m5.195s
00:22:23.369 09:24:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable
00:22:23.369 09:24:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:23.369 ************************************
00:22:23.369 END TEST nvmf_multicontroller
00:22:23.369 ************************************
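
The summary closes nvmf_multicontroller at about 11.3 s wall clock, and the harness chains straight into the next host-side suite below. Outside CI, either suite can be invoked directly from an SPDK checkout on a machine with a configured test NIC; a sketch (workspace path taken from this run, root privileges assumed):

    # Hedged sketch: running the same suites by hand.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo test/nvmf/host/multicontroller.sh --transport=tcp
    sudo test/nvmf/host/aer.sh --transport=tcp
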
00:22:23.369 09:24:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:22:23.369 09:24:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:22:23.369 09:24:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:22:23.369 09:24:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:23.369 ************************************
00:22:23.369 START TEST nvmf_aer
00:22:23.369 ************************************
00:22:23.369 09:24:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:22:23.369 * Looking for test storage...
00:22:23.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-:
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-:
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<'
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:23.369 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:22:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:23.370 --rc genhtml_branch_coverage=1
00:22:23.370 --rc genhtml_function_coverage=1
00:22:23.370 --rc genhtml_legend=1
00:22:23.370 --rc geninfo_all_blocks=1
00:22:23.370 --rc geninfo_unexecuted_blocks=1
00:22:23.370
00:22:23.370 '
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:22:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:23.370 --rc genhtml_branch_coverage=1
00:22:23.370 --rc genhtml_function_coverage=1
00:22:23.370 --rc genhtml_legend=1
00:22:23.370 --rc geninfo_all_blocks=1
00:22:23.370 --rc geninfo_unexecuted_blocks=1
00:22:23.370
00:22:23.370 '
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:22:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:23.370 --rc genhtml_branch_coverage=1
00:22:23.370 --rc genhtml_function_coverage=1
00:22:23.370 --rc genhtml_legend=1
00:22:23.370 --rc geninfo_all_blocks=1
00:22:23.370 --rc geninfo_unexecuted_blocks=1
00:22:23.370
00:22:23.370 '
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:22:23.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:23.370 --rc genhtml_branch_coverage=1
00:22:23.370 --rc genhtml_function_coverage=1
00:22:23.370 --rc genhtml_legend=1
00:22:23.370 --rc geninfo_all_blocks=1
00:22:23.370 --rc geninfo_unexecuted_blocks=1
00:22:23.370
00:22:23.370 '
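
The cmp_versions trace above is the harness gating its lcov options on the installed lcov major version: each version string is split on '.', '-', and ':' into an array, then the arrays are compared field by field, treating missing fields as zero. Condensed to its core, the pattern looks roughly like this (a hedged sketch, not the verbatim scripts/common.sh source):

    # Hedged sketch of the field-by-field version compare traced above.
    lt() {
        local IFS=.-: i
        local -a v1=($1) v2=($2)
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo 'lcov is older than 2.x'
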
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:23.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:23.370 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:23.371 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:23.371 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:23.371 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable
00:22:23.371 09:24:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=()
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=()
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=()
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=()
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=()
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:29.941 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:22:29.942 Found 0000:86:00.0 (0x8086 - 0x159b)
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:22:29.942 Found 0000:86:00.1 (0x8086 - 0x159b)
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:22:29.942 Found net devices under 0000:86:00.0: cvl_0_0
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:22:29.942 Found net devices under 0000:86:00.1: cvl_0_1
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
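
Everything from gather_supported_nvmf_pci_devs down to the iptables rule is the phy-mode network bring-up: the two Intel E810 ports (PCI 0x8086:0x159b) surface as cvl_0_0 and cvl_0_1, the first port is moved into a private network namespace to play the target, and port 4420 is opened between the two sides. Stripped of harness plumbing, the wiring is (interface names and addresses exactly as logged):

    # Hedged sketch of the namespace wiring the trace performs.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
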
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:29.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:29.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms
00:22:29.942
00:22:29.942 --- 10.0.0.2 ping statistics ---
00:22:29.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:29.942 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms
00:22:29.942 09:24:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:29.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:29.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:22:29.942
00:22:29.942 --- 10.0.0.1 ping statistics ---
00:22:29.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:29.942 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1186795
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1186795
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 1186795 ']'
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:29.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.942 [2024-11-19 09:24:30.109362] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
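
With both directions pinging cleanly, nvmfappstart launches the target inside the namespace, so every listener it later opens binds on the target-side address 10.0.0.2 while the initiator tools reach it across the two back-to-back NIC ports. The bare launch behind the helper is (binary path from this workspace; flag meanings per standard SPDK usage):

    # Hedged sketch: the target launch nvmfappstart performs.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    # -m 0xF runs four reactor cores; -e 0xFFFF enables all tracepoint groups.
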
00:22:29.942 [2024-11-19 09:24:30.109409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:29.942 [2024-11-19 09:24:30.184295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:29.942 [2024-11-19 09:24:30.226825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:29.942 [2024-11-19 09:24:30.226864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:29.942 [2024-11-19 09:24:30.226873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:29.942 [2024-11-19 09:24:30.226879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:29.942 [2024-11-19 09:24:30.226885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:29.942 [2024-11-19 09:24:30.228329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:29.942 [2024-11-19 09:24:30.228460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:29.942 [2024-11-19 09:24:30.228579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:29.942 [2024-11-19 09:24:30.228580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:29.942 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 [2024-11-19 09:24:30.373967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 Malloc0
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 [2024-11-19 09:24:30.439240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 [
00:22:29.943 {
00:22:29.943 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:29.943 "subtype": "Discovery",
00:22:29.943 "listen_addresses": [],
00:22:29.943 "allow_any_host": true,
00:22:29.943 "hosts": []
00:22:29.943 },
00:22:29.943 {
00:22:29.943 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:29.943 "subtype": "NVMe",
00:22:29.943 "listen_addresses": [
00:22:29.943 {
00:22:29.943 "trtype": "TCP",
00:22:29.943 "adrfam": "IPv4",
00:22:29.943 "traddr": "10.0.0.2",
00:22:29.943 "trsvcid": "4420"
00:22:29.943 }
00:22:29.943 ],
00:22:29.943 "allow_any_host": true,
00:22:29.943 "hosts": [],
00:22:29.943 "serial_number": "SPDK00000000000001",
00:22:29.943 "model_number": "SPDK bdev Controller",
00:22:29.943 "max_namespaces": 2,
00:22:29.943 "min_cntlid": 1,
00:22:29.943 "max_cntlid": 65519,
00:22:29.943 "namespaces": [
00:22:29.943 {
00:22:29.943 "nsid": 1,
00:22:29.943 "bdev_name": "Malloc0",
00:22:29.943 "name": "Malloc0",
00:22:29.943 "nguid": "F322A229004147CBA5479C72CA3D610A",
00:22:29.943 "uuid": "f322a229-0041-47cb-a547-9c72ca3d610a"
00:22:29.943 }
00:22:29.943 ]
00:22:29.943 }
00:22:29.943 ]
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
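
The nvmf_get_subsystems dump confirms the shape the rest of the test relies on: cnode1 allows any host, caps namespaces at 2 (the -m 2 above), and exposes Malloc0 as nsid 1 on the 10.0.0.2:4420 listener. The same bring-up as plain rpc.py calls (rpc_cmd is a thin wrapper over rpc.py; the default /var/tmp/spdk.sock socket is an assumption):

    # Hedged sketch of the target-side configuration traced above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
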
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1186967
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']'
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']'
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 Malloc1
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:22:29.943 Asynchronous Event Request test
00:22:29.943 Attaching to 10.0.0.2
00:22:29.943 Attached to 10.0.0.2
00:22:29.943 Registering asynchronous event callbacks...
00:22:29.943 Starting namespace attribute notice tests for all controllers...
00:22:29.943 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:22:29.943 aer_cb - Changed Namespace
00:22:29.943 Cleaning up...
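
The console block is the aer tool's view of the sequence: it connects to cnode1, registers an AER callback, and blocks until the namespace add fires a Changed Namespace List notice (log page 4, event type 0x02), at which point it touches the agreed file and exits. The two halves, run by hand (tool path, touch file, and RPCs taken from this run):

    # Hedged sketch: consumer waits for the AEN, producer triggers it.
    test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the AEN
    wait   # aer exits once the namespace-change notice arrives
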
00:22:29.943 [ 00:22:29.943 { 00:22:29.943 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:29.943 "subtype": "Discovery", 00:22:29.943 "listen_addresses": [], 00:22:29.943 "allow_any_host": true, 00:22:29.943 "hosts": [] 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.943 "subtype": "NVMe", 00:22:29.943 "listen_addresses": [ 00:22:29.943 { 00:22:29.943 "trtype": "TCP", 00:22:29.943 "adrfam": "IPv4", 00:22:29.943 "traddr": "10.0.0.2", 00:22:29.943 "trsvcid": "4420" 00:22:29.943 } 00:22:29.943 ], 00:22:29.943 "allow_any_host": true, 00:22:29.943 "hosts": [], 00:22:29.943 "serial_number": "SPDK00000000000001", 00:22:29.943 "model_number": "SPDK bdev Controller", 00:22:29.943 "max_namespaces": 2, 00:22:29.943 "min_cntlid": 1, 00:22:29.943 "max_cntlid": 65519, 00:22:29.943 "namespaces": [ 00:22:29.943 { 00:22:29.943 "nsid": 1, 00:22:29.943 "bdev_name": "Malloc0", 00:22:29.943 "name": "Malloc0", 00:22:29.943 "nguid": "F322A229004147CBA5479C72CA3D610A", 00:22:29.943 "uuid": "f322a229-0041-47cb-a547-9c72ca3d610a" 00:22:29.943 }, 00:22:29.943 { 00:22:29.943 "nsid": 2, 00:22:29.943 "bdev_name": "Malloc1", 00:22:29.943 "name": "Malloc1", 00:22:29.943 "nguid": "72A5D6ED6A514EFA96DC57FC0416C353", 00:22:29.943 "uuid": "72a5d6ed-6a51-4efa-96dc-57fc0416c353" 00:22:29.943 } 00:22:29.943 ] 00:22:29.943 } 00:22:29.943 ] 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1186967 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.943 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.944 rmmod 
nvme_tcp 00:22:29.944 rmmod nvme_fabrics 00:22:29.944 rmmod nvme_keyring 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1186795 ']' 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1186795 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 1186795 ']' 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 1186795 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1186795 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1186795' 00:22:29.944 killing process with pid 1186795 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 1186795 00:22:29.944 09:24:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 1186795 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.203 09:24:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.107 09:24:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.107 00:22:32.107 real 0m9.207s 00:22:32.107 user 0m5.108s 00:22:32.107 sys 0m4.881s 00:22:32.107 09:24:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:32.107 09:24:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:32.107 ************************************ 00:22:32.107 END TEST nvmf_aer 00:22:32.107 ************************************ 00:22:32.366 09:24:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:32.366 09:24:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:32.366 09:24:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:32.366 09:24:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.366 ************************************ 00:22:32.366 START TEST nvmf_async_init 00:22:32.366 ************************************ 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:32.367 * Looking for test storage... 00:22:32.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:32.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.367 --rc genhtml_branch_coverage=1 00:22:32.367 --rc genhtml_function_coverage=1 00:22:32.367 --rc genhtml_legend=1 00:22:32.367 --rc geninfo_all_blocks=1 00:22:32.367 --rc geninfo_unexecuted_blocks=1 00:22:32.367 00:22:32.367 ' 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:32.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.367 --rc genhtml_branch_coverage=1 00:22:32.367 --rc genhtml_function_coverage=1 00:22:32.367 --rc genhtml_legend=1 00:22:32.367 --rc geninfo_all_blocks=1 00:22:32.367 --rc geninfo_unexecuted_blocks=1 00:22:32.367 00:22:32.367 ' 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:32.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.367 --rc genhtml_branch_coverage=1 00:22:32.367 --rc genhtml_function_coverage=1 00:22:32.367 --rc genhtml_legend=1 00:22:32.367 --rc geninfo_all_blocks=1 00:22:32.367 --rc geninfo_unexecuted_blocks=1 00:22:32.367 00:22:32.367 ' 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:32.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.367 --rc genhtml_branch_coverage=1 00:22:32.367 --rc genhtml_function_coverage=1 00:22:32.367 --rc genhtml_legend=1 00:22:32.367 --rc geninfo_all_blocks=1 00:22:32.367 --rc geninfo_unexecuted_blocks=1 00:22:32.367 00:22:32.367 ' 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.367 09:24:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.367 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:32.626 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:32.627 09:24:33
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=56cb39df5efb4fe4a83e02fb38bc76f1 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.627 09:24:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:39.196 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:39.196 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:39.196 Found net devices under 0000:86:00.0: cvl_0_0 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:39.196 Found net devices under 0000:86:00.1: cvl_0_1 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.196 09:24:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.196 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:22:39.197 00:22:39.197 --- 10.0.0.2 ping statistics --- 00:22:39.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.197 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:22:39.197 00:22:39.197 --- 10.0.0.1 ping statistics --- 00:22:39.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.197 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1190564 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1190564 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 1190564 ']' 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 [2024-11-19 09:24:39.406851] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:22:39.197 [2024-11-19 09:24:39.406895] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.197 [2024-11-19 09:24:39.486992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.197 [2024-11-19 09:24:39.528429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.197 [2024-11-19 09:24:39.528469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.197 [2024-11-19 09:24:39.528476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.197 [2024-11-19 09:24:39.528482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.197 [2024-11-19 09:24:39.528487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.197 [2024-11-19 09:24:39.529052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 [2024-11-19 09:24:39.664594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 null0 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 56cb39df5efb4fe4a83e02fb38bc76f1 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 [2024-11-19 09:24:39.712849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 nvme0n1 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.197 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.197 [ 00:22:39.197 { 00:22:39.197 "name": "nvme0n1", 00:22:39.197 "aliases": [ 00:22:39.197 "56cb39df-5efb-4fe4-a83e-02fb38bc76f1" 00:22:39.197 ], 00:22:39.197 "product_name": "NVMe disk", 00:22:39.197 "block_size": 512, 00:22:39.197 "num_blocks": 2097152, 00:22:39.197 "uuid": "56cb39df-5efb-4fe4-a83e-02fb38bc76f1", 00:22:39.197 "numa_id": 1, 00:22:39.197 "assigned_rate_limits": { 00:22:39.197 "rw_ios_per_sec": 0, 00:22:39.197 "rw_mbytes_per_sec": 0, 00:22:39.197 "r_mbytes_per_sec": 0, 00:22:39.197 "w_mbytes_per_sec": 0 00:22:39.197 }, 00:22:39.197 "claimed": false, 00:22:39.197 "zoned": false, 00:22:39.197 "supported_io_types": { 00:22:39.197 "read": true, 00:22:39.197 "write": true, 00:22:39.197 "unmap": false, 00:22:39.197 "flush": true, 00:22:39.197 "reset": true, 00:22:39.197 "nvme_admin": true, 00:22:39.197 "nvme_io": true, 00:22:39.197 "nvme_io_md": false, 00:22:39.197 "write_zeroes": true, 00:22:39.197 "zcopy": false, 00:22:39.197 "get_zone_info": false, 00:22:39.197 "zone_management": false, 00:22:39.197 "zone_append": false, 00:22:39.197 "compare": true, 00:22:39.197 "compare_and_write": true, 00:22:39.197 "abort": true, 00:22:39.197 "seek_hole": false, 00:22:39.197 "seek_data": false, 00:22:39.197 "copy": true, 00:22:39.197 "nvme_iov_md": false 00:22:39.197 }, 00:22:39.197 
"memory_domains": [ 00:22:39.197 { 00:22:39.197 "dma_device_id": "system", 00:22:39.197 "dma_device_type": 1 00:22:39.197 } 00:22:39.197 ], 00:22:39.197 "driver_specific": { 00:22:39.197 "nvme": [ 00:22:39.197 { 00:22:39.197 "trid": { 00:22:39.197 "trtype": "TCP", 00:22:39.197 "adrfam": "IPv4", 00:22:39.197 "traddr": "10.0.0.2", 00:22:39.197 "trsvcid": "4420", 00:22:39.197 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:39.197 }, 00:22:39.197 "ctrlr_data": { 00:22:39.197 "cntlid": 1, 00:22:39.197 "vendor_id": "0x8086", 00:22:39.197 "model_number": "SPDK bdev Controller", 00:22:39.197 "serial_number": "00000000000000000000", 00:22:39.197 "firmware_revision": "25.01", 00:22:39.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.197 "oacs": { 00:22:39.197 "security": 0, 00:22:39.197 "format": 0, 00:22:39.197 "firmware": 0, 00:22:39.197 "ns_manage": 0 00:22:39.198 }, 00:22:39.198 "multi_ctrlr": true, 00:22:39.198 "ana_reporting": false 00:22:39.198 }, 00:22:39.198 "vs": { 00:22:39.198 "nvme_version": "1.3" 00:22:39.198 }, 00:22:39.198 "ns_data": { 00:22:39.198 "id": 1, 00:22:39.198 "can_share": true 00:22:39.198 } 00:22:39.198 } 00:22:39.198 ], 00:22:39.198 "mp_policy": "active_passive" 00:22:39.198 } 00:22:39.198 } 00:22:39.198 ] 00:22:39.198 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.198 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:39.198 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.198 09:24:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 [2024-11-19 09:24:39.973493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:39.198 [2024-11-19 09:24:39.973548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15960a0 (9): Bad file descriptor 00:22:39.198 [2024-11-19 09:24:40.105047] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 [ 00:22:39.198 { 00:22:39.198 "name": "nvme0n1", 00:22:39.198 "aliases": [ 00:22:39.198 "56cb39df-5efb-4fe4-a83e-02fb38bc76f1" 00:22:39.198 ], 00:22:39.198 "product_name": "NVMe disk", 00:22:39.198 "block_size": 512, 00:22:39.198 "num_blocks": 2097152, 00:22:39.198 "uuid": "56cb39df-5efb-4fe4-a83e-02fb38bc76f1", 00:22:39.198 "numa_id": 1, 00:22:39.198 "assigned_rate_limits": { 00:22:39.198 "rw_ios_per_sec": 0, 00:22:39.198 "rw_mbytes_per_sec": 0, 00:22:39.198 "r_mbytes_per_sec": 0, 00:22:39.198 "w_mbytes_per_sec": 0 00:22:39.198 }, 00:22:39.198 "claimed": false, 00:22:39.198 "zoned": false, 00:22:39.198 "supported_io_types": { 00:22:39.198 "read": true, 00:22:39.198 "write": true, 00:22:39.198 "unmap": false, 00:22:39.198 "flush": true, 00:22:39.198 "reset": true, 00:22:39.198 "nvme_admin": true, 00:22:39.198 "nvme_io": true, 00:22:39.198 "nvme_io_md": false, 00:22:39.198 "write_zeroes": true, 00:22:39.198 "zcopy": false, 00:22:39.198 "get_zone_info": false, 00:22:39.198 "zone_management": false, 00:22:39.198 "zone_append": false, 00:22:39.198 "compare": true, 00:22:39.198 "compare_and_write": true, 00:22:39.198 "abort": true, 00:22:39.198 "seek_hole": false, 00:22:39.198 "seek_data": false, 00:22:39.198 "copy": true, 00:22:39.198 "nvme_iov_md": false 00:22:39.198 }, 00:22:39.198 "memory_domains": [ 00:22:39.198 { 00:22:39.198 "dma_device_id": "system", 00:22:39.198 "dma_device_type": 1 00:22:39.198 } 00:22:39.198 ], 00:22:39.198 "driver_specific": { 00:22:39.198 "nvme": [ 00:22:39.198 { 00:22:39.198 "trid": { 00:22:39.198 "trtype": "TCP", 00:22:39.198 "adrfam": "IPv4", 00:22:39.198 "traddr": "10.0.0.2", 00:22:39.198 "trsvcid": "4420", 00:22:39.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:39.198 }, 00:22:39.198 "ctrlr_data": { 00:22:39.198 "cntlid": 2, 00:22:39.198 "vendor_id": "0x8086", 00:22:39.198 "model_number": "SPDK bdev Controller", 00:22:39.198 "serial_number": "00000000000000000000", 00:22:39.198 "firmware_revision": "25.01", 00:22:39.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.198 "oacs": { 00:22:39.198 "security": 0, 00:22:39.198 "format": 0, 00:22:39.198 "firmware": 0, 00:22:39.198 "ns_manage": 0 00:22:39.198 }, 00:22:39.198 "multi_ctrlr": true, 00:22:39.198 "ana_reporting": false 00:22:39.198 }, 00:22:39.198 "vs": { 00:22:39.198 "nvme_version": "1.3" 00:22:39.198 }, 00:22:39.198 "ns_data": { 00:22:39.198 "id": 1, 00:22:39.198 "can_share": true 00:22:39.198 } 00:22:39.198 } 00:22:39.198 ], 00:22:39.198 "mp_policy": "active_passive" 00:22:39.198 } 00:22:39.198 } 00:22:39.198 ] 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
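Next the script moves to TLS. The secret is an NVMe TLS pre-shared key in interchange format (NVMeTLSkey-1:01:...:); the test writes it to a temp file, restricts the file to mode 0600, and registers it with the keyring under the name key0. A sketch of that setup with the same key value echoed below (the mktemp path naturally differs from run to run):

  KEY_PATH=$(mktemp)    # this run happened to get /tmp/tmp.Yq5jcdI7AR
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"                                  # keep the PSK private to the owner
  scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"    # now referencable as --psk key0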
00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Yq5jcdI7AR 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Yq5jcdI7AR 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.Yq5jcdI7AR 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 [2024-11-19 09:24:40.178185] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.198 [2024-11-19 09:24:40.178316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.198 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.198 [2024-11-19 09:24:40.198248] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.457 nvme0n1 00:22:39.457 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.457 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:39.457 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.457 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.457 [ 00:22:39.457 { 00:22:39.457 "name": "nvme0n1", 00:22:39.457 "aliases": [ 00:22:39.457 "56cb39df-5efb-4fe4-a83e-02fb38bc76f1" 00:22:39.457 ], 00:22:39.457 "product_name": "NVMe disk", 00:22:39.457 "block_size": 512, 00:22:39.457 "num_blocks": 2097152, 00:22:39.458 "uuid": "56cb39df-5efb-4fe4-a83e-02fb38bc76f1", 00:22:39.458 "numa_id": 1, 00:22:39.458 "assigned_rate_limits": { 00:22:39.458 "rw_ios_per_sec": 0, 00:22:39.458 "rw_mbytes_per_sec": 0, 00:22:39.458 "r_mbytes_per_sec": 0, 00:22:39.458 "w_mbytes_per_sec": 0 00:22:39.458 }, 00:22:39.458 "claimed": false, 00:22:39.458 "zoned": false, 00:22:39.458 "supported_io_types": { 00:22:39.458 "read": true, 00:22:39.458 "write": true, 00:22:39.458 "unmap": false, 00:22:39.458 "flush": true, 00:22:39.458 "reset": true, 00:22:39.458 "nvme_admin": true, 00:22:39.458 "nvme_io": true, 00:22:39.458 "nvme_io_md": false, 00:22:39.458 "write_zeroes": true, 00:22:39.458 "zcopy": false, 00:22:39.458 "get_zone_info": false, 00:22:39.458 "zone_management": false, 00:22:39.458 "zone_append": false, 00:22:39.458 "compare": true, 00:22:39.458 "compare_and_write": true, 00:22:39.458 "abort": true, 00:22:39.458 "seek_hole": false, 00:22:39.458 "seek_data": false, 00:22:39.458 "copy": true, 00:22:39.458 "nvme_iov_md": false 00:22:39.458 }, 00:22:39.458 "memory_domains": [ 00:22:39.458 { 00:22:39.458 "dma_device_id": "system", 00:22:39.458 "dma_device_type": 1 00:22:39.458 } 00:22:39.458 ], 00:22:39.458 "driver_specific": { 00:22:39.458 "nvme": [ 00:22:39.458 { 00:22:39.458 "trid": { 00:22:39.458 "trtype": "TCP", 00:22:39.458 "adrfam": "IPv4", 00:22:39.458 "traddr": "10.0.0.2", 00:22:39.458 "trsvcid": "4421", 00:22:39.458 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:39.458 }, 00:22:39.458 "ctrlr_data": { 00:22:39.458 "cntlid": 3, 00:22:39.458 "vendor_id": "0x8086", 00:22:39.458 "model_number": "SPDK bdev Controller", 00:22:39.458 "serial_number": "00000000000000000000", 00:22:39.458 "firmware_revision": "25.01", 00:22:39.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:39.458 "oacs": { 00:22:39.458 "security": 0, 00:22:39.458 "format": 0, 00:22:39.458 "firmware": 0, 00:22:39.458 "ns_manage": 0 00:22:39.458 }, 00:22:39.458 "multi_ctrlr": true, 00:22:39.458 "ana_reporting": false 00:22:39.458 }, 00:22:39.458 "vs": { 00:22:39.458 "nvme_version": "1.3" 00:22:39.458 }, 00:22:39.458 "ns_data": { 00:22:39.458 "id": 1, 00:22:39.458 "can_share": true 00:22:39.458 } 00:22:39.458 } 00:22:39.458 ], 00:22:39.458 "mp_policy": "active_passive" 00:22:39.458 } 00:22:39.458 } 00:22:39.458 ] 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.Yq5jcdI7AR 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
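Taken together, the TLS leg shows the per-host PSK gate: open access is disabled on the subsystem, a --secure-channel listener is added on a second port, the host NQN is paired with the registered key, and the initiator then attaches through 4421 presenting that key (landing as cntlid 3 in the dump above). Condensed into the bare rpc.py sequence, under the same naming assumptions as the sketches above:

  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0    # TLS connect as host1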
00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:39.458 rmmod nvme_tcp 00:22:39.458 rmmod nvme_fabrics 00:22:39.458 rmmod nvme_keyring 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1190564 ']' 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1190564 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 1190564 ']' 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 1190564 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1190564 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1190564' 00:22:39.458 killing process with pid 1190564 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 1190564 00:22:39.458 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 1190564 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.718 09:24:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.630 09:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.630 00:22:41.630 real 0m9.424s 00:22:41.630 user 0m3.094s 00:22:41.630 sys 0m4.761s 00:22:41.630 09:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:41.630 09:24:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:41.630 ************************************ 00:22:41.630 END TEST nvmf_async_init 00:22:41.630 ************************************ 00:22:41.630 09:24:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:41.630 09:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:41.630 09:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:41.630 09:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.891 ************************************ 00:22:41.891 START TEST dma 00:22:41.891 ************************************ 00:22:41.891 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:41.891 * Looking for test storage... 00:22:41.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.891 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:41.891 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:22:41.891 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:41.891 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:41.891 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.891 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.891 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.891 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:41.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.892 --rc genhtml_branch_coverage=1 00:22:41.892 --rc genhtml_function_coverage=1 00:22:41.892 --rc genhtml_legend=1 00:22:41.892 --rc geninfo_all_blocks=1 00:22:41.892 --rc geninfo_unexecuted_blocks=1 00:22:41.892 00:22:41.892 ' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:41.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.892 --rc genhtml_branch_coverage=1 00:22:41.892 --rc genhtml_function_coverage=1 00:22:41.892 --rc genhtml_legend=1 00:22:41.892 --rc geninfo_all_blocks=1 00:22:41.892 --rc geninfo_unexecuted_blocks=1 00:22:41.892 00:22:41.892 ' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:41.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.892 --rc genhtml_branch_coverage=1 00:22:41.892 --rc genhtml_function_coverage=1 00:22:41.892 --rc genhtml_legend=1 00:22:41.892 --rc geninfo_all_blocks=1 00:22:41.892 --rc geninfo_unexecuted_blocks=1 00:22:41.892 00:22:41.892 ' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:41.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.892 --rc genhtml_branch_coverage=1 00:22:41.892 --rc genhtml_function_coverage=1 00:22:41.892 --rc genhtml_legend=1 00:22:41.892 --rc geninfo_all_blocks=1 00:22:41.892 --rc geninfo_unexecuted_blocks=1 00:22:41.892 00:22:41.892 ' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.892 
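[editor's note] The lcov version check traced above (lt 1.15 2 via cmp_versions in scripts/common.sh) is a component-wise numeric compare after splitting on ".", "-" and ":". A simplified reconstruction of that logic, not a verbatim copy: it handles only the "<", ">" and "==" operators and assumes numeric components (the real helper normalizes each component through its decimal() function first).

cmp_versions() {
  local ver1 ver2 op=$2 v
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$3"
  # Walk the longer of the two component lists; missing components compare as 0
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ((ver1[v] > ver2[v])) && { [[ $op == ">" ]]; return; }
    ((ver1[v] < ver2[v])) && { [[ $op == "<" ]]; return; }
  done
  [[ $op == "==" ]]
}
lt() { cmp_versions "$1" "<" "$2"; }   # lt 1.15 2: 1 < 2, so lcov 1.15 is treated as pre-2.x

That "pre-2.x" result is what selects the --rc lcov_branch_coverage=1 style options exported in the LCOV_OPTS block above.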
09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:41.892 00:22:41.892 real 0m0.203s 00:22:41.892 user 0m0.127s 00:22:41.892 sys 0m0.090s 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:41.892 09:24:42 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:41.892 ************************************ 00:22:41.892 END TEST dma 00:22:41.892 ************************************ 00:22:42.152 09:24:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:42.152 09:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:42.152 09:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:42.152 09:24:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:42.152 ************************************ 00:22:42.152 START TEST nvmf_identify 00:22:42.152 
************************************ 00:22:42.152 09:24:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:42.152 * Looking for test storage... 00:22:42.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.152 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.153 --rc genhtml_branch_coverage=1 00:22:42.153 --rc genhtml_function_coverage=1 00:22:42.153 --rc genhtml_legend=1 00:22:42.153 --rc geninfo_all_blocks=1 00:22:42.153 --rc geninfo_unexecuted_blocks=1 00:22:42.153 00:22:42.153 ' 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.153 --rc genhtml_branch_coverage=1 00:22:42.153 --rc genhtml_function_coverage=1 00:22:42.153 --rc genhtml_legend=1 00:22:42.153 --rc geninfo_all_blocks=1 00:22:42.153 --rc geninfo_unexecuted_blocks=1 00:22:42.153 00:22:42.153 ' 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.153 --rc genhtml_branch_coverage=1 00:22:42.153 --rc genhtml_function_coverage=1 00:22:42.153 --rc genhtml_legend=1 00:22:42.153 --rc geninfo_all_blocks=1 00:22:42.153 --rc geninfo_unexecuted_blocks=1 00:22:42.153 00:22:42.153 ' 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:42.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.153 --rc genhtml_branch_coverage=1 00:22:42.153 --rc genhtml_function_coverage=1 00:22:42.153 --rc genhtml_legend=1 00:22:42.153 --rc geninfo_all_blocks=1 00:22:42.153 --rc geninfo_unexecuted_blocks=1 00:22:42.153 00:22:42.153 ' 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.153 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.154 09:24:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:48.725 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:48.726 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:48.726 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
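[editor's note] The discovery loop running here, and finishing on the lines that follow, matches NICs by PCI vendor/device ID (the e810 array holds 0x8086:0x1592/0x159b) and then resolves each PCI function to its kernel netdev through sysfs. A minimal standalone equivalent of the traced glob, with the PCI address taken from the "Found 0000:86:00.0" line above:

pci=0000:86:00.0                                   # e810 function reported above (0x8086 - 0x159b)
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same glob as nvmf/common.sh@411
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, leaving e.g. cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"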
00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:48.726 Found net devices under 0000:86:00.0: cvl_0_0 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:48.726 Found net devices under 0000:86:00.1: cvl_0_1 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.726 09:24:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:22:48.726 00:22:48.726 --- 10.0.0.2 ping statistics --- 00:22:48.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.726 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:22:48.726 00:22:48.726 --- 10.0.0.1 ping statistics --- 00:22:48.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.726 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1194269 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1194269 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:48.726 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 1194269 ']' 00:22:48.727 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.727 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:48.727 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.727 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:48.727 09:24:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:48.727 [2024-11-19 09:24:49.213048] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:22:48.727 [2024-11-19 09:24:49.213094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.727 [2024-11-19 09:24:49.293325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.727 [2024-11-19 09:24:49.336347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.727 [2024-11-19 09:24:49.336389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.727 [2024-11-19 09:24:49.336397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.727 [2024-11-19 09:24:49.336403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.727 [2024-11-19 09:24:49.336408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.727 [2024-11-19 09:24:49.337873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.727 [2024-11-19 09:24:49.338006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.727 [2024-11-19 09:24:49.338043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.727 [2024-11-19 09:24:49.338045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.296 [2024-11-19 09:24:50.061134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.296 Malloc0 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.296 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:49.297 [2024-11-19 09:24:50.165332] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:49.297 [
00:22:49.297 {
00:22:49.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:49.297 "subtype": "Discovery",
00:22:49.297 "listen_addresses": [
00:22:49.297 {
00:22:49.297 "trtype": "TCP",
00:22:49.297 "adrfam": "IPv4",
00:22:49.297 "traddr": "10.0.0.2",
00:22:49.297 "trsvcid": "4420"
00:22:49.297 }
00:22:49.297 ],
00:22:49.297 "allow_any_host": true,
00:22:49.297 "hosts": []
00:22:49.297 },
00:22:49.297 {
00:22:49.297 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:49.297 "subtype": "NVMe",
00:22:49.297 "listen_addresses": [
00:22:49.297 {
00:22:49.297 "trtype": "TCP",
00:22:49.297 "adrfam": "IPv4",
00:22:49.297 "traddr": "10.0.0.2",
00:22:49.297 "trsvcid": "4420"
00:22:49.297 }
00:22:49.297 ],
00:22:49.297 "allow_any_host": true,
00:22:49.297 "hosts": [],
00:22:49.297 "serial_number": "SPDK00000000000001",
00:22:49.297 "model_number": "SPDK bdev Controller",
00:22:49.297 "max_namespaces": 32,
00:22:49.297 "min_cntlid": 1,
00:22:49.297 "max_cntlid": 65519,
00:22:49.297 "namespaces": [
00:22:49.297 {
00:22:49.297 "nsid": 1,
00:22:49.297 "bdev_name": "Malloc0",
00:22:49.297 "name": "Malloc0",
00:22:49.297 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:49.297 "eui64": "ABCDEF0123456789",
00:22:49.297 "uuid": "96da04ee-1452-4538-99d8-27e9fc089173"
00:22:49.297 }
00:22:49.297 ]
00:22:49.297 }
00:22:49.297 ]
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:49.297 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420

subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:49.297 [2024-11-19 09:24:50.222669] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:22:49.297 [2024-11-19 09:24:50.222702] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194421 ] 00:22:49.297 [2024-11-19 09:24:50.262917] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:49.297 [2024-11-19 09:24:50.266972] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:49.297 [2024-11-19 09:24:50.266980] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:49.297 [2024-11-19 09:24:50.266991] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:49.297 [2024-11-19 09:24:50.267000] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:49.297 [2024-11-19 09:24:50.267545] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:49.297 [2024-11-19 09:24:50.267580] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x239b690 0 00:22:49.297 [2024-11-19 09:24:50.281963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:49.297 [2024-11-19 09:24:50.281982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:49.297 [2024-11-19 09:24:50.281986] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:49.297 [2024-11-19 09:24:50.281989] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:49.297 [2024-11-19 09:24:50.282024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.282030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.282034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.297 [2024-11-19 09:24:50.282047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:49.297 [2024-11-19 09:24:50.282065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 00:22:49.297 [2024-11-19 09:24:50.289956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.297 [2024-11-19 09:24:50.289965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.297 [2024-11-19 09:24:50.289971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.289976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.297 [2024-11-19 09:24:50.289989] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:49.297 [2024-11-19 09:24:50.289997] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:49.297 [2024-11-19 09:24:50.290002] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:49.297 [2024-11-19 09:24:50.290016] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290019] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.297 [2024-11-19 09:24:50.290030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.297 [2024-11-19 09:24:50.290042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 00:22:49.297 [2024-11-19 09:24:50.290204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.297 [2024-11-19 09:24:50.290211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.297 [2024-11-19 09:24:50.290214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.297 [2024-11-19 09:24:50.290223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:49.297 [2024-11-19 09:24:50.290229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:49.297 [2024-11-19 09:24:50.290235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.297 [2024-11-19 09:24:50.290248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.297 [2024-11-19 09:24:50.290259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 00:22:49.297 [2024-11-19 09:24:50.290324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.297 [2024-11-19 09:24:50.290330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.297 [2024-11-19 09:24:50.290333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.297 [2024-11-19 09:24:50.290342] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:49.297 [2024-11-19 09:24:50.290349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:49.297 [2024-11-19 09:24:50.290355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.297 [2024-11-19 09:24:50.290367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.297 [2024-11-19 09:24:50.290377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 
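[editor's note] Condensing the rpc_cmd calls traced above, the identify test's target-side setup and the discovery-page query it then issues reduce to the following short sequence; a sketch, assuming it is run from the spdk repo root against the nvmf_tgt started at host/identify.sh@18 (the target lives in the cvl_0_0_ns_spdk namespace, but its /var/tmp/spdk.sock RPC socket is still reachable from the host side):

# Target-side configuration, mirroring host/identify.sh@24..35
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host-side identify against the discovery subsystem, as at host/identify.sh@39;
# -L all enables the verbose nvme_tcp/nvme_ctrlr debug stream seen in the log
./build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The debug lines that follow are that identify process walking the controller init state machine: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN toggling, then IDENTIFY once CSTS.RDY is observed.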
00:22:49.297 [2024-11-19 09:24:50.290443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.297 [2024-11-19 09:24:50.290449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.297 [2024-11-19 09:24:50.290454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.297 [2024-11-19 09:24:50.290463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:49.297 [2024-11-19 09:24:50.290471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.297 [2024-11-19 09:24:50.290478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.297 [2024-11-19 09:24:50.290484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.297 [2024-11-19 09:24:50.290493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 00:22:49.298 [2024-11-19 09:24:50.290562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.298 [2024-11-19 09:24:50.290567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.298 [2024-11-19 09:24:50.290570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.290574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.298 [2024-11-19 09:24:50.290578] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:49.298 [2024-11-19 09:24:50.290583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:49.298 [2024-11-19 09:24:50.290589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:49.298 [2024-11-19 09:24:50.290697] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:49.298 [2024-11-19 09:24:50.290702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:49.298 [2024-11-19 09:24:50.290709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.290713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.290716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.298 [2024-11-19 09:24:50.290722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.298 [2024-11-19 09:24:50.290732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 00:22:49.298 [2024-11-19 09:24:50.290812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.298 [2024-11-19 09:24:50.290818] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.298 [2024-11-19 09:24:50.290821] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.290824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.298 [2024-11-19 09:24:50.290829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:49.298 [2024-11-19 09:24:50.290837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.290840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.290844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.298 [2024-11-19 09:24:50.290849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.298 [2024-11-19 09:24:50.290859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 00:22:49.298 [2024-11-19 09:24:50.290921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.298 [2024-11-19 09:24:50.290927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.298 [2024-11-19 09:24:50.290930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.290933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.298 [2024-11-19 09:24:50.290938] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:49.298 [2024-11-19 09:24:50.290942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:49.298 [2024-11-19 09:24:50.290954] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:49.298 [2024-11-19 09:24:50.290964] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:49.298 [2024-11-19 09:24:50.290972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.290976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.298 [2024-11-19 09:24:50.290982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.298 [2024-11-19 09:24:50.290991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 00:22:49.298 [2024-11-19 09:24:50.291075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.298 [2024-11-19 09:24:50.291081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.298 [2024-11-19 09:24:50.291084] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291088] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x239b690): datao=0, datal=4096, cccid=0 00:22:49.298 [2024-11-19 09:24:50.291092] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x23fd100) on tqpair(0x239b690): expected_datao=0, payload_size=4096 00:22:49.298 [2024-11-19 09:24:50.291097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291118] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291122] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.298 [2024-11-19 09:24:50.291163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.298 [2024-11-19 09:24:50.291166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.298 [2024-11-19 09:24:50.291176] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:49.298 [2024-11-19 09:24:50.291181] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:49.298 [2024-11-19 09:24:50.291185] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:49.298 [2024-11-19 09:24:50.291190] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:49.298 [2024-11-19 09:24:50.291196] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:49.298 [2024-11-19 09:24:50.291201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:49.298 [2024-11-19 09:24:50.291209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:49.298 [2024-11-19 09:24:50.291215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.298 [2024-11-19 09:24:50.291230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.298 [2024-11-19 09:24:50.291240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 00:22:49.298 [2024-11-19 09:24:50.291302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.298 [2024-11-19 09:24:50.291307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.298 [2024-11-19 09:24:50.291311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.298 [2024-11-19 09:24:50.291323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x239b690) 00:22:49.298 
[2024-11-19 09:24:50.291336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.298 [2024-11-19 09:24:50.291341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x239b690) 00:22:49.298 [2024-11-19 09:24:50.291352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.298 [2024-11-19 09:24:50.291357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x239b690) 00:22:49.298 [2024-11-19 09:24:50.291369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.298 [2024-11-19 09:24:50.291374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x239b690) 00:22:49.298 [2024-11-19 09:24:50.291385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.298 [2024-11-19 09:24:50.291389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:49.298 [2024-11-19 09:24:50.291397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:49.298 [2024-11-19 09:24:50.291403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.298 [2024-11-19 09:24:50.291406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x239b690) 00:22:49.298 [2024-11-19 09:24:50.291412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.298 [2024-11-19 09:24:50.291423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd100, cid 0, qid 0 00:22:49.298 [2024-11-19 09:24:50.291428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd280, cid 1, qid 0 00:22:49.298 [2024-11-19 09:24:50.291432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd400, cid 2, qid 0 00:22:49.298 [2024-11-19 09:24:50.291436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd580, cid 3, qid 0 00:22:49.298 [2024-11-19 09:24:50.291442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd700, cid 4, qid 0 00:22:49.298 [2024-11-19 09:24:50.291537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.298 [2024-11-19 09:24:50.291542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.298 [2024-11-19 09:24:50.291546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:49.298 [2024-11-19 09:24:50.291549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd700) on tqpair=0x239b690 00:22:49.298 [2024-11-19 09:24:50.291556] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:49.298 [2024-11-19 09:24:50.291561] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:49.299 [2024-11-19 09:24:50.291569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.291573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x239b690) 00:22:49.299 [2024-11-19 09:24:50.291579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.299 [2024-11-19 09:24:50.291588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd700, cid 4, qid 0 00:22:49.299 [2024-11-19 09:24:50.291662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.299 [2024-11-19 09:24:50.291668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.299 [2024-11-19 09:24:50.291671] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.291674] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x239b690): datao=0, datal=4096, cccid=4 00:22:49.299 [2024-11-19 09:24:50.291678] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23fd700) on tqpair(0x239b690): expected_datao=0, payload_size=4096 00:22:49.299 [2024-11-19 09:24:50.291682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.291692] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.291696] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.299 [2024-11-19 09:24:50.332096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.299 [2024-11-19 09:24:50.332100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd700) on tqpair=0x239b690 00:22:49.299 [2024-11-19 09:24:50.332118] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:49.299 [2024-11-19 09:24:50.332144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x239b690) 00:22:49.299 [2024-11-19 09:24:50.332155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.299 [2024-11-19 09:24:50.332161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x239b690) 00:22:49.299 [2024-11-19 09:24:50.332173] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.299 [2024-11-19 09:24:50.332188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd700, cid 4, qid 0 00:22:49.299 [2024-11-19 09:24:50.332194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd880, cid 5, qid 0 00:22:49.299 [2024-11-19 09:24:50.332297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.299 [2024-11-19 09:24:50.332302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.299 [2024-11-19 09:24:50.332308] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332312] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x239b690): datao=0, datal=1024, cccid=4 00:22:49.299 [2024-11-19 09:24:50.332316] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23fd700) on tqpair(0x239b690): expected_datao=0, payload_size=1024 00:22:49.299 [2024-11-19 09:24:50.332319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332325] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332329] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332334] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.299 [2024-11-19 09:24:50.332338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.299 [2024-11-19 09:24:50.332342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.299 [2024-11-19 09:24:50.332345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd880) on tqpair=0x239b690 00:22:49.561 [2024-11-19 09:24:50.373067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.561 [2024-11-19 09:24:50.373087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.561 [2024-11-19 09:24:50.373091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd700) on tqpair=0x239b690 00:22:49.561 [2024-11-19 09:24:50.373107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x239b690) 00:22:49.561 [2024-11-19 09:24:50.373119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.561 [2024-11-19 09:24:50.373137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd700, cid 4, qid 0 00:22:49.561 [2024-11-19 09:24:50.373268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.561 [2024-11-19 09:24:50.373273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.561 [2024-11-19 09:24:50.373276] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373280] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x239b690): datao=0, datal=3072, cccid=4 00:22:49.561 [2024-11-19 09:24:50.373284] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23fd700) on tqpair(0x239b690): expected_datao=0, payload_size=3072 00:22:49.561 [2024-11-19 09:24:50.373288] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373299] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373303] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.561 [2024-11-19 09:24:50.373322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.561 [2024-11-19 09:24:50.373325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd700) on tqpair=0x239b690 00:22:49.561 [2024-11-19 09:24:50.373336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x239b690) 00:22:49.561 [2024-11-19 09:24:50.373346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.561 [2024-11-19 09:24:50.373359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd700, cid 4, qid 0 00:22:49.561 [2024-11-19 09:24:50.373434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.561 [2024-11-19 09:24:50.373440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.561 [2024-11-19 09:24:50.373443] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373449] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x239b690): datao=0, datal=8, cccid=4 00:22:49.561 [2024-11-19 09:24:50.373453] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23fd700) on tqpair(0x239b690): expected_datao=0, payload_size=8 00:22:49.561 [2024-11-19 09:24:50.373457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373463] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.373466] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.414128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.561 [2024-11-19 09:24:50.414139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.561 [2024-11-19 09:24:50.414142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.561 [2024-11-19 09:24:50.414145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd700) on tqpair=0x239b690 00:22:49.561 ===================================================== 00:22:49.561 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:49.561 ===================================================== 00:22:49.561 Controller Capabilities/Features 00:22:49.561 ================================ 00:22:49.561 Vendor ID: 0000 00:22:49.561 Subsystem Vendor ID: 0000 00:22:49.561 Serial Number: .................... 00:22:49.561 Model Number: ........................................ 
00:22:49.561 Firmware Version: 25.01 00:22:49.561 Recommended Arb Burst: 0 00:22:49.561 IEEE OUI Identifier: 00 00 00 00:22:49.561 Multi-path I/O 00:22:49.561 May have multiple subsystem ports: No 00:22:49.561 May have multiple controllers: No 00:22:49.561 Associated with SR-IOV VF: No 00:22:49.561 Max Data Transfer Size: 131072 00:22:49.561 Max Number of Namespaces: 0 00:22:49.561 Max Number of I/O Queues: 1024 00:22:49.561 NVMe Specification Version (VS): 1.3 00:22:49.561 NVMe Specification Version (Identify): 1.3 00:22:49.561 Maximum Queue Entries: 128 00:22:49.561 Contiguous Queues Required: Yes 00:22:49.561 Arbitration Mechanisms Supported 00:22:49.561 Weighted Round Robin: Not Supported 00:22:49.561 Vendor Specific: Not Supported 00:22:49.561 Reset Timeout: 15000 ms 00:22:49.561 Doorbell Stride: 4 bytes 00:22:49.561 NVM Subsystem Reset: Not Supported 00:22:49.561 Command Sets Supported 00:22:49.561 NVM Command Set: Supported 00:22:49.561 Boot Partition: Not Supported 00:22:49.561 Memory Page Size Minimum: 4096 bytes 00:22:49.561 Memory Page Size Maximum: 4096 bytes 00:22:49.561 Persistent Memory Region: Not Supported 00:22:49.561 Optional Asynchronous Events Supported 00:22:49.561 Namespace Attribute Notices: Not Supported 00:22:49.561 Firmware Activation Notices: Not Supported 00:22:49.561 ANA Change Notices: Not Supported 00:22:49.561 PLE Aggregate Log Change Notices: Not Supported 00:22:49.561 LBA Status Info Alert Notices: Not Supported 00:22:49.561 EGE Aggregate Log Change Notices: Not Supported 00:22:49.561 Normal NVM Subsystem Shutdown event: Not Supported 00:22:49.561 Zone Descriptor Change Notices: Not Supported 00:22:49.562 Discovery Log Change Notices: Supported 00:22:49.562 Controller Attributes 00:22:49.562 128-bit Host Identifier: Not Supported 00:22:49.562 Non-Operational Permissive Mode: Not Supported 00:22:49.562 NVM Sets: Not Supported 00:22:49.562 Read Recovery Levels: Not Supported 00:22:49.562 Endurance Groups: Not Supported 00:22:49.562 Predictable Latency Mode: Not Supported 00:22:49.562 Traffic Based Keep ALive: Not Supported 00:22:49.562 Namespace Granularity: Not Supported 00:22:49.562 SQ Associations: Not Supported 00:22:49.562 UUID List: Not Supported 00:22:49.562 Multi-Domain Subsystem: Not Supported 00:22:49.562 Fixed Capacity Management: Not Supported 00:22:49.562 Variable Capacity Management: Not Supported 00:22:49.562 Delete Endurance Group: Not Supported 00:22:49.562 Delete NVM Set: Not Supported 00:22:49.562 Extended LBA Formats Supported: Not Supported 00:22:49.562 Flexible Data Placement Supported: Not Supported 00:22:49.562 00:22:49.562 Controller Memory Buffer Support 00:22:49.562 ================================ 00:22:49.562 Supported: No 00:22:49.562 00:22:49.562 Persistent Memory Region Support 00:22:49.562 ================================ 00:22:49.562 Supported: No 00:22:49.562 00:22:49.562 Admin Command Set Attributes 00:22:49.562 ============================ 00:22:49.562 Security Send/Receive: Not Supported 00:22:49.562 Format NVM: Not Supported 00:22:49.562 Firmware Activate/Download: Not Supported 00:22:49.562 Namespace Management: Not Supported 00:22:49.562 Device Self-Test: Not Supported 00:22:49.562 Directives: Not Supported 00:22:49.562 NVMe-MI: Not Supported 00:22:49.562 Virtualization Management: Not Supported 00:22:49.562 Doorbell Buffer Config: Not Supported 00:22:49.562 Get LBA Status Capability: Not Supported 00:22:49.562 Command & Feature Lockdown Capability: Not Supported 00:22:49.562 Abort Command Limit: 1 00:22:49.562 Async 
Event Request Limit: 4 00:22:49.562 Number of Firmware Slots: N/A 00:22:49.562 Firmware Slot 1 Read-Only: N/A 00:22:49.562 Firmware Activation Without Reset: N/A 00:22:49.562 Multiple Update Detection Support: N/A 00:22:49.562 Firmware Update Granularity: No Information Provided 00:22:49.562 Per-Namespace SMART Log: No 00:22:49.562 Asymmetric Namespace Access Log Page: Not Supported 00:22:49.562 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:49.562 Command Effects Log Page: Not Supported 00:22:49.562 Get Log Page Extended Data: Supported 00:22:49.562 Telemetry Log Pages: Not Supported 00:22:49.562 Persistent Event Log Pages: Not Supported 00:22:49.562 Supported Log Pages Log Page: May Support 00:22:49.562 Commands Supported & Effects Log Page: Not Supported 00:22:49.562 Feature Identifiers & Effects Log Page:May Support 00:22:49.562 NVMe-MI Commands & Effects Log Page: May Support 00:22:49.562 Data Area 4 for Telemetry Log: Not Supported 00:22:49.562 Error Log Page Entries Supported: 128 00:22:49.562 Keep Alive: Not Supported 00:22:49.562 00:22:49.562 NVM Command Set Attributes 00:22:49.562 ========================== 00:22:49.562 Submission Queue Entry Size 00:22:49.562 Max: 1 00:22:49.562 Min: 1 00:22:49.562 Completion Queue Entry Size 00:22:49.562 Max: 1 00:22:49.562 Min: 1 00:22:49.562 Number of Namespaces: 0 00:22:49.562 Compare Command: Not Supported 00:22:49.562 Write Uncorrectable Command: Not Supported 00:22:49.562 Dataset Management Command: Not Supported 00:22:49.562 Write Zeroes Command: Not Supported 00:22:49.562 Set Features Save Field: Not Supported 00:22:49.562 Reservations: Not Supported 00:22:49.562 Timestamp: Not Supported 00:22:49.562 Copy: Not Supported 00:22:49.562 Volatile Write Cache: Not Present 00:22:49.562 Atomic Write Unit (Normal): 1 00:22:49.562 Atomic Write Unit (PFail): 1 00:22:49.562 Atomic Compare & Write Unit: 1 00:22:49.562 Fused Compare & Write: Supported 00:22:49.562 Scatter-Gather List 00:22:49.562 SGL Command Set: Supported 00:22:49.562 SGL Keyed: Supported 00:22:49.562 SGL Bit Bucket Descriptor: Not Supported 00:22:49.562 SGL Metadata Pointer: Not Supported 00:22:49.562 Oversized SGL: Not Supported 00:22:49.562 SGL Metadata Address: Not Supported 00:22:49.562 SGL Offset: Supported 00:22:49.562 Transport SGL Data Block: Not Supported 00:22:49.562 Replay Protected Memory Block: Not Supported 00:22:49.562 00:22:49.562 Firmware Slot Information 00:22:49.562 ========================= 00:22:49.562 Active slot: 0 00:22:49.562 00:22:49.562 00:22:49.562 Error Log 00:22:49.562 ========= 00:22:49.562 00:22:49.562 Active Namespaces 00:22:49.562 ================= 00:22:49.562 Discovery Log Page 00:22:49.562 ================== 00:22:49.562 Generation Counter: 2 00:22:49.562 Number of Records: 2 00:22:49.562 Record Format: 0 00:22:49.562 00:22:49.562 Discovery Log Entry 0 00:22:49.562 ---------------------- 00:22:49.562 Transport Type: 3 (TCP) 00:22:49.562 Address Family: 1 (IPv4) 00:22:49.562 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:49.562 Entry Flags: 00:22:49.562 Duplicate Returned Information: 1 00:22:49.562 Explicit Persistent Connection Support for Discovery: 1 00:22:49.562 Transport Requirements: 00:22:49.562 Secure Channel: Not Required 00:22:49.562 Port ID: 0 (0x0000) 00:22:49.562 Controller ID: 65535 (0xffff) 00:22:49.562 Admin Max SQ Size: 128 00:22:49.562 Transport Service Identifier: 4420 00:22:49.562 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:49.562 Transport Address: 10.0.0.2 00:22:49.562 
Discovery Log Entry 1 00:22:49.562 ---------------------- 00:22:49.562 Transport Type: 3 (TCP) 00:22:49.562 Address Family: 1 (IPv4) 00:22:49.562 Subsystem Type: 2 (NVM Subsystem) 00:22:49.562 Entry Flags: 00:22:49.562 Duplicate Returned Information: 0 00:22:49.562 Explicit Persistent Connection Support for Discovery: 0 00:22:49.562 Transport Requirements: 00:22:49.562 Secure Channel: Not Required 00:22:49.562 Port ID: 0 (0x0000) 00:22:49.562 Controller ID: 65535 (0xffff) 00:22:49.562 Admin Max SQ Size: 128 00:22:49.562 Transport Service Identifier: 4420 00:22:49.562 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:49.562 Transport Address: 10.0.0.2 [2024-11-19 09:24:50.414233] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:49.562 [2024-11-19 09:24:50.414244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd100) on tqpair=0x239b690 00:22:49.562 [2024-11-19 09:24:50.414251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.562 [2024-11-19 09:24:50.414255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd280) on tqpair=0x239b690 00:22:49.562 [2024-11-19 09:24:50.414259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.562 [2024-11-19 09:24:50.414264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd400) on tqpair=0x239b690 00:22:49.562 [2024-11-19 09:24:50.414268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.562 [2024-11-19 09:24:50.414272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd580) on tqpair=0x239b690 00:22:49.562 [2024-11-19 09:24:50.414276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.562 [2024-11-19 09:24:50.414284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.562 [2024-11-19 09:24:50.414288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.562 [2024-11-19 09:24:50.414291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x239b690) 00:22:49.562 [2024-11-19 09:24:50.414298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-19 09:24:50.414312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd580, cid 3, qid 0 00:22:49.562 [2024-11-19 09:24:50.414375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.562 [2024-11-19 09:24:50.414381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.562 [2024-11-19 09:24:50.414385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.562 [2024-11-19 09:24:50.414388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd580) on tqpair=0x239b690 00:22:49.562 [2024-11-19 09:24:50.414397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.562 [2024-11-19 09:24:50.414401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.562 [2024-11-19 09:24:50.414404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x239b690) 00:22:49.562 [2024-11-19 
09:24:50.414410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.562 [2024-11-19 09:24:50.414423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd580, cid 3, qid 0 00:22:49.562 [2024-11-19 09:24:50.414508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.562 [2024-11-19 09:24:50.414514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.563 [2024-11-19 09:24:50.414519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd580) on tqpair=0x239b690 00:22:49.563 [2024-11-19 09:24:50.414528] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:49.563 [2024-11-19 09:24:50.414532] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:49.563 [2024-11-19 09:24:50.414540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414544] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x239b690) 00:22:49.563 [2024-11-19 09:24:50.414553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-19 09:24:50.414562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd580, cid 3, qid 0 00:22:49.563 [2024-11-19 09:24:50.414628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.563 [2024-11-19 09:24:50.414634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.563 [2024-11-19 09:24:50.414637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd580) on tqpair=0x239b690 00:22:49.563 [2024-11-19 09:24:50.414650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x239b690) 00:22:49.563 [2024-11-19 09:24:50.414663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-19 09:24:50.414672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd580, cid 3, qid 0 00:22:49.563 [2024-11-19 09:24:50.414736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.563 [2024-11-19 09:24:50.414742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.563 [2024-11-19 09:24:50.414745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd580) on tqpair=0x239b690 00:22:49.563 [2024-11-19 09:24:50.414756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414763] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x239b690) 00:22:49.563 [2024-11-19 09:24:50.414768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-19 09:24:50.414778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd580, cid 3, qid 0 00:22:49.563 [2024-11-19 09:24:50.414838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.563 [2024-11-19 09:24:50.414844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.563 [2024-11-19 09:24:50.414847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd580) on tqpair=0x239b690 00:22:49.563 [2024-11-19 09:24:50.414859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.414866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x239b690) 00:22:49.563 [2024-11-19 09:24:50.414871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-19 09:24:50.414880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd580, cid 3, qid 0 00:22:49.563 [2024-11-19 09:24:50.414944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.563 [2024-11-19 09:24:50.418955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.563 [2024-11-19 09:24:50.418960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.418964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd580) on tqpair=0x239b690 00:22:49.563 [2024-11-19 09:24:50.418974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.418979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.418982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x239b690) 00:22:49.563 [2024-11-19 09:24:50.418989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-19 09:24:50.419000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23fd580, cid 3, qid 0 00:22:49.563 [2024-11-19 09:24:50.419153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.563 [2024-11-19 09:24:50.419159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.563 [2024-11-19 09:24:50.419162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.419166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23fd580) on tqpair=0x239b690 00:22:49.563 [2024-11-19 09:24:50.419173] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:22:49.563 00:22:49.563 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
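The command above starts the second identify pass, this time against the NVM subsystem nqn.2016-06.io.spdk:cnode1 rather than the discovery subsystem. A minimal sketch of what such a run does at its core, using SPDK's public C API (this is illustrative, not the tool's actual source; the program name is made up):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Sketch: parse the -r transport string, connect, and read the cached
     * Identify Controller data. Error handling trimmed for brevity. */
    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same key:value format as the -r argument in the command above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Connecting runs the init sequence traced in the log that follows:
         * the CC.EN/CSTS.RDY handshake, IDENTIFY, AER configuration, and
         * keep-alive setup. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("subnqn: %s, mdts: %u\n", cdata->subnqn, (unsigned)cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The transport string passed to -r is exactly the format spdk_nvme_transport_id_parse() accepts, which is why the log below immediately shows "adrfam 1 ai_family 2" and "trsvcid is 4420" during socket setup.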
00:22:49.563 [2024-11-19 09:24:50.456553] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:22:49.563 [2024-11-19 09:24:50.456586] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194442 ] 00:22:49.563 [2024-11-19 09:24:50.497591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:49.563 [2024-11-19 09:24:50.497629] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:49.563 [2024-11-19 09:24:50.497634] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:49.563 [2024-11-19 09:24:50.497645] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:49.563 [2024-11-19 09:24:50.497652] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:49.563 [2024-11-19 09:24:50.501138] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:49.563 [2024-11-19 09:24:50.501171] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19e2690 0 00:22:49.563 [2024-11-19 09:24:50.508958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:49.563 [2024-11-19 09:24:50.508971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:49.563 [2024-11-19 09:24:50.508975] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:49.563 [2024-11-19 09:24:50.508978] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:49.563 [2024-11-19 09:24:50.509003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.509008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.509012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.563 [2024-11-19 09:24:50.509024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:49.563 [2024-11-19 09:24:50.509041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.563 [2024-11-19 09:24:50.516957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.563 [2024-11-19 09:24:50.516966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.563 [2024-11-19 09:24:50.516969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.516973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.563 [2024-11-19 09:24:50.516983] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:49.563 [2024-11-19 09:24:50.516989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:49.563 [2024-11-19 09:24:50.516994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:49.563 [2024-11-19 09:24:50.517004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.517008] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.517011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.563 [2024-11-19 09:24:50.517018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-19 09:24:50.517031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.563 [2024-11-19 09:24:50.517116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.563 [2024-11-19 09:24:50.517122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.563 [2024-11-19 09:24:50.517125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.517128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.563 [2024-11-19 09:24:50.517133] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:49.563 [2024-11-19 09:24:50.517139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:49.563 [2024-11-19 09:24:50.517146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.517149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.517152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.563 [2024-11-19 09:24:50.517158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.563 [2024-11-19 09:24:50.517168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.563 [2024-11-19 09:24:50.517231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.563 [2024-11-19 09:24:50.517237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.563 [2024-11-19 09:24:50.517240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.563 [2024-11-19 09:24:50.517244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.563 [2024-11-19 09:24:50.517248] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:49.563 [2024-11-19 09:24:50.517255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:49.563 [2024-11-19 09:24:50.517260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.517273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.564 [2024-11-19 09:24:50.517285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.564 [2024-11-19 09:24:50.517348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.564 [2024-11-19 
09:24:50.517354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.564 [2024-11-19 09:24:50.517357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.564 [2024-11-19 09:24:50.517365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:49.564 [2024-11-19 09:24:50.517373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.517385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.564 [2024-11-19 09:24:50.517395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.564 [2024-11-19 09:24:50.517458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.564 [2024-11-19 09:24:50.517464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.564 [2024-11-19 09:24:50.517467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.564 [2024-11-19 09:24:50.517474] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:49.564 [2024-11-19 09:24:50.517478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:49.564 [2024-11-19 09:24:50.517485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:49.564 [2024-11-19 09:24:50.517593] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:49.564 [2024-11-19 09:24:50.517597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:49.564 [2024-11-19 09:24:50.517603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.517616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.564 [2024-11-19 09:24:50.517626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.564 [2024-11-19 09:24:50.517688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.564 [2024-11-19 09:24:50.517694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.564 [2024-11-19 09:24:50.517697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.564 [2024-11-19 
09:24:50.517700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.564 [2024-11-19 09:24:50.517705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:49.564 [2024-11-19 09:24:50.517713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.517729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.564 [2024-11-19 09:24:50.517739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.564 [2024-11-19 09:24:50.517808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.564 [2024-11-19 09:24:50.517814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.564 [2024-11-19 09:24:50.517817] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.564 [2024-11-19 09:24:50.517824] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:49.564 [2024-11-19 09:24:50.517828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:49.564 [2024-11-19 09:24:50.517835] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:49.564 [2024-11-19 09:24:50.517845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:49.564 [2024-11-19 09:24:50.517852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.517862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.564 [2024-11-19 09:24:50.517871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.564 [2024-11-19 09:24:50.517972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.564 [2024-11-19 09:24:50.517979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.564 [2024-11-19 09:24:50.517982] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.517985] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e2690): datao=0, datal=4096, cccid=0 00:22:49.564 [2024-11-19 09:24:50.517989] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a44100) on tqpair(0x19e2690): expected_datao=0, payload_size=4096 00:22:49.564 [2024-11-19 09:24:50.518002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518015] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518019] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.564 [2024-11-19 09:24:50.518058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.564 [2024-11-19 09:24:50.518061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.564 [2024-11-19 09:24:50.518071] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:49.564 [2024-11-19 09:24:50.518076] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:49.564 [2024-11-19 09:24:50.518080] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:49.564 [2024-11-19 09:24:50.518083] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:49.564 [2024-11-19 09:24:50.518090] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:49.564 [2024-11-19 09:24:50.518094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:49.564 [2024-11-19 09:24:50.518101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:49.564 [2024-11-19 09:24:50.518109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.518122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.564 [2024-11-19 09:24:50.518133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.564 [2024-11-19 09:24:50.518197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.564 [2024-11-19 09:24:50.518203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.564 [2024-11-19 09:24:50.518206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.564 [2024-11-19 09:24:50.518217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518220] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.518229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.564 [2024-11-19 09:24:50.518234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518237] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.518245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.564 [2024-11-19 09:24:50.518250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.518262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.564 [2024-11-19 09:24:50.518267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e2690) 00:22:49.564 [2024-11-19 09:24:50.518278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.564 [2024-11-19 09:24:50.518282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:49.564 [2024-11-19 09:24:50.518290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:49.564 [2024-11-19 09:24:50.518296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.564 [2024-11-19 09:24:50.518299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e2690) 00:22:49.565 [2024-11-19 09:24:50.518304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.565 [2024-11-19 09:24:50.518315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44100, cid 0, qid 0 00:22:49.565 [2024-11-19 09:24:50.518320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44280, cid 1, qid 0 00:22:49.565 [2024-11-19 09:24:50.518324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44400, cid 2, qid 0 00:22:49.565 [2024-11-19 09:24:50.518329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44580, cid 3, qid 0 00:22:49.565 [2024-11-19 09:24:50.518334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44700, cid 4, qid 0 00:22:49.565 [2024-11-19 09:24:50.518432] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.565 [2024-11-19 09:24:50.518438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.565 [2024-11-19 09:24:50.518441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.518444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44700) on tqpair=0x19e2690 00:22:49.565 [2024-11-19 09:24:50.518450] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 
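The "Sending keep alive every 5000000 us" record above reflects the cadence negotiated through GET FEATURES KEEP ALIVE TIMER. With SPDK's polled-mode driver, those KEEP ALIVE commands are only actually emitted while the application keeps polling the admin queue; the same poll also reaps AER completions. A rough sketch of such a poll loop (illustrative only; app_running is a hypothetical application flag):

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /* Illustrative only. Periodic admin-queue polling is what drives
     * keep-alive transmission and AER completion handling on a fabrics
     * controller. `ctrlr` is assumed connected. */
    static void
    admin_poll_loop(struct spdk_nvme_ctrlr *ctrlr, volatile bool *app_running)
    {
        while (*app_running) {
            /* Returns completions reaped, or negative on transport error. */
            if (spdk_nvme_ctrlr_process_admin_completions(ctrlr) < 0) {
                break;
            }
        }
    }
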
00:22:49.565 [2024-11-19 09:24:50.518455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.518462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.518468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.518474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.518477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.518480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e2690) 00:22:49.565 [2024-11-19 09:24:50.518486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:49.565 [2024-11-19 09:24:50.518495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44700, cid 4, qid 0 00:22:49.565 [2024-11-19 09:24:50.518556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.565 [2024-11-19 09:24:50.518562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.565 [2024-11-19 09:24:50.518565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.518568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44700) on tqpair=0x19e2690 00:22:49.565 [2024-11-19 09:24:50.518621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.518631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.518638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.518641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e2690) 00:22:49.565 [2024-11-19 09:24:50.518647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.565 [2024-11-19 09:24:50.518657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44700, cid 4, qid 0 00:22:49.565 [2024-11-19 09:24:50.518732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.565 [2024-11-19 09:24:50.518738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.565 [2024-11-19 09:24:50.518741] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.518744] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e2690): datao=0, datal=4096, cccid=4 00:22:49.565 [2024-11-19 09:24:50.518748] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a44700) on tqpair(0x19e2690): expected_datao=0, payload_size=4096 00:22:49.565 [2024-11-19 09:24:50.518752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.518763] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.518767] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.559018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.565 [2024-11-19 09:24:50.559032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.565 [2024-11-19 09:24:50.559036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.559039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44700) on tqpair=0x19e2690 00:22:49.565 [2024-11-19 09:24:50.559050] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:49.565 [2024-11-19 09:24:50.559061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.559072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.559079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.559083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e2690) 00:22:49.565 [2024-11-19 09:24:50.559090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.565 [2024-11-19 09:24:50.559102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44700, cid 4, qid 0 00:22:49.565 [2024-11-19 09:24:50.559189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.565 [2024-11-19 09:24:50.559195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.565 [2024-11-19 09:24:50.559199] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.559202] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e2690): datao=0, datal=4096, cccid=4 00:22:49.565 [2024-11-19 09:24:50.559206] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a44700) on tqpair(0x19e2690): expected_datao=0, payload_size=4096 00:22:49.565 [2024-11-19 09:24:50.559210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.559222] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.559226] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.603957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.565 [2024-11-19 09:24:50.603969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.565 [2024-11-19 09:24:50.603973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.603976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44700) on tqpair=0x19e2690 00:22:49.565 [2024-11-19 09:24:50.603991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.604000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.604008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.565 
[2024-11-19 09:24:50.604012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e2690) 00:22:49.565 [2024-11-19 09:24:50.604019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.565 [2024-11-19 09:24:50.604032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44700, cid 4, qid 0 00:22:49.565 [2024-11-19 09:24:50.604143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.565 [2024-11-19 09:24:50.604148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.565 [2024-11-19 09:24:50.604151] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.604155] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e2690): datao=0, datal=4096, cccid=4 00:22:49.565 [2024-11-19 09:24:50.604159] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a44700) on tqpair(0x19e2690): expected_datao=0, payload_size=4096 00:22:49.565 [2024-11-19 09:24:50.604165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.604171] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.604174] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.604188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.565 [2024-11-19 09:24:50.604194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.565 [2024-11-19 09:24:50.604197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.604200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44700) on tqpair=0x19e2690 00:22:49.565 [2024-11-19 09:24:50.604207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.604214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.604222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.604227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.604232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.604237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.604241] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:49.565 [2024-11-19 09:24:50.604245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:49.565 [2024-11-19 09:24:50.604250] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:49.565 [2024-11-19 09:24:50.604263] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.604267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e2690) 00:22:49.565 [2024-11-19 09:24:50.604273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.565 [2024-11-19 09:24:50.604279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.604282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.565 [2024-11-19 09:24:50.604285] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e2690) 00:22:49.566 [2024-11-19 09:24:50.604290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.566 [2024-11-19 09:24:50.604303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44700, cid 4, qid 0 00:22:49.566 [2024-11-19 09:24:50.604308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44880, cid 5, qid 0 00:22:49.566 [2024-11-19 09:24:50.604393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.566 [2024-11-19 09:24:50.604399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.566 [2024-11-19 09:24:50.604402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44700) on tqpair=0x19e2690 00:22:49.566 [2024-11-19 09:24:50.604410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.566 [2024-11-19 09:24:50.604415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.566 [2024-11-19 09:24:50.604418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44880) on tqpair=0x19e2690 00:22:49.566 [2024-11-19 09:24:50.604431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e2690) 00:22:49.566 [2024-11-19 09:24:50.604440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.566 [2024-11-19 09:24:50.604450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44880, cid 5, qid 0 00:22:49.566 [2024-11-19 09:24:50.604521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.566 [2024-11-19 09:24:50.604526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.566 [2024-11-19 09:24:50.604529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44880) on tqpair=0x19e2690 00:22:49.566 [2024-11-19 09:24:50.604540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e2690) 00:22:49.566 [2024-11-19 09:24:50.604550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:49.566 [2024-11-19 09:24:50.604559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44880, cid 5, qid 0 00:22:49.566 [2024-11-19 09:24:50.604633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.566 [2024-11-19 09:24:50.604639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.566 [2024-11-19 09:24:50.604642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44880) on tqpair=0x19e2690 00:22:49.566 [2024-11-19 09:24:50.604654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604657] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e2690) 00:22:49.566 [2024-11-19 09:24:50.604663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.566 [2024-11-19 09:24:50.604672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44880, cid 5, qid 0 00:22:49.566 [2024-11-19 09:24:50.604736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.566 [2024-11-19 09:24:50.604741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.566 [2024-11-19 09:24:50.604744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44880) on tqpair=0x19e2690 00:22:49.566 [2024-11-19 09:24:50.604762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19e2690) 00:22:49.566 [2024-11-19 09:24:50.604772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.566 [2024-11-19 09:24:50.604778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19e2690) 00:22:49.566 [2024-11-19 09:24:50.604786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.566 [2024-11-19 09:24:50.604792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19e2690) 00:22:49.566 [2024-11-19 09:24:50.604801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.566 [2024-11-19 09:24:50.604811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.604814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19e2690) 00:22:49.566 [2024-11-19 09:24:50.604820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.566 [2024-11-19 09:24:50.604831] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44880, cid 5, qid 0 00:22:49.566 [2024-11-19 09:24:50.604835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44700, cid 4, qid 0 00:22:49.566 [2024-11-19 09:24:50.604839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44a00, cid 6, qid 0 00:22:49.566 [2024-11-19 09:24:50.604843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44b80, cid 7, qid 0 00:22:49.566 [2024-11-19 09:24:50.604995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.566 [2024-11-19 09:24:50.605002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.566 [2024-11-19 09:24:50.605005] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605008] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e2690): datao=0, datal=8192, cccid=5 00:22:49.566 [2024-11-19 09:24:50.605012] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a44880) on tqpair(0x19e2690): expected_datao=0, payload_size=8192 00:22:49.566 [2024-11-19 09:24:50.605016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605029] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605033] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.566 [2024-11-19 09:24:50.605043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.566 [2024-11-19 09:24:50.605046] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605049] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e2690): datao=0, datal=512, cccid=4 00:22:49.566 [2024-11-19 09:24:50.605053] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a44700) on tqpair(0x19e2690): expected_datao=0, payload_size=512 00:22:49.566 [2024-11-19 09:24:50.605056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605062] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605065] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.566 [2024-11-19 09:24:50.605075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.566 [2024-11-19 09:24:50.605078] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605081] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e2690): datao=0, datal=512, cccid=6 00:22:49.566 [2024-11-19 09:24:50.605085] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a44a00) on tqpair(0x19e2690): expected_datao=0, payload_size=512 00:22:49.566 [2024-11-19 09:24:50.605088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605093] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605097] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605101] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:49.566 [2024-11-19 09:24:50.605106] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:49.566 [2024-11-19 09:24:50.605109] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605112] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19e2690): datao=0, datal=4096, cccid=7 00:22:49.566 [2024-11-19 09:24:50.605116] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a44b80) on tqpair(0x19e2690): expected_datao=0, payload_size=4096 00:22:49.566 [2024-11-19 09:24:50.605121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605127] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:49.566 [2024-11-19 09:24:50.605130] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:49.827 [2024-11-19 09:24:50.646040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.827 [2024-11-19 09:24:50.646056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.827 [2024-11-19 09:24:50.646060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.827 [2024-11-19 09:24:50.646064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44880) on tqpair=0x19e2690 00:22:49.827 [2024-11-19 09:24:50.646077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.827 [2024-11-19 09:24:50.646082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.827 [2024-11-19 09:24:50.646085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.827 [2024-11-19 09:24:50.646089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44700) on tqpair=0x19e2690 00:22:49.827 [2024-11-19 09:24:50.646097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.827 [2024-11-19 09:24:50.646102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.827 [2024-11-19 09:24:50.646105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.827 [2024-11-19 09:24:50.646109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44a00) on tqpair=0x19e2690 00:22:49.827 [2024-11-19 09:24:50.646115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.827 [2024-11-19 09:24:50.646120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.827 [2024-11-19 09:24:50.646123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.827 [2024-11-19 09:24:50.646126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44b80) on tqpair=0x19e2690 00:22:49.827 ===================================================== 00:22:49.827 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.827 ===================================================== 00:22:49.827 Controller Capabilities/Features 00:22:49.827 ================================ 00:22:49.827 Vendor ID: 8086 00:22:49.827 Subsystem Vendor ID: 8086 00:22:49.827 Serial Number: SPDK00000000000001 00:22:49.827 Model Number: SPDK bdev Controller 00:22:49.827 Firmware Version: 25.01 00:22:49.827 Recommended Arb Burst: 6 00:22:49.827 IEEE OUI Identifier: e4 d2 5c 00:22:49.827 Multi-path I/O 00:22:49.827 May have multiple subsystem ports: Yes 00:22:49.827 May have multiple controllers: Yes 00:22:49.827 Associated with SR-IOV VF: No 00:22:49.827 Max Data Transfer Size: 131072 00:22:49.827 Max Number of Namespaces: 32 00:22:49.827 Max Number of I/O Queues: 127 
00:22:49.827 NVMe Specification Version (VS): 1.3 00:22:49.827 NVMe Specification Version (Identify): 1.3 00:22:49.827 Maximum Queue Entries: 128 00:22:49.827 Contiguous Queues Required: Yes 00:22:49.827 Arbitration Mechanisms Supported 00:22:49.827 Weighted Round Robin: Not Supported 00:22:49.827 Vendor Specific: Not Supported 00:22:49.827 Reset Timeout: 15000 ms 00:22:49.827 Doorbell Stride: 4 bytes 00:22:49.827 NVM Subsystem Reset: Not Supported 00:22:49.827 Command Sets Supported 00:22:49.827 NVM Command Set: Supported 00:22:49.827 Boot Partition: Not Supported 00:22:49.827 Memory Page Size Minimum: 4096 bytes 00:22:49.827 Memory Page Size Maximum: 4096 bytes 00:22:49.827 Persistent Memory Region: Not Supported 00:22:49.827 Optional Asynchronous Events Supported 00:22:49.827 Namespace Attribute Notices: Supported 00:22:49.827 Firmware Activation Notices: Not Supported 00:22:49.827 ANA Change Notices: Not Supported 00:22:49.827 PLE Aggregate Log Change Notices: Not Supported 00:22:49.827 LBA Status Info Alert Notices: Not Supported 00:22:49.827 EGE Aggregate Log Change Notices: Not Supported 00:22:49.827 Normal NVM Subsystem Shutdown event: Not Supported 00:22:49.827 Zone Descriptor Change Notices: Not Supported 00:22:49.827 Discovery Log Change Notices: Not Supported 00:22:49.827 Controller Attributes 00:22:49.827 128-bit Host Identifier: Supported 00:22:49.827 Non-Operational Permissive Mode: Not Supported 00:22:49.827 NVM Sets: Not Supported 00:22:49.827 Read Recovery Levels: Not Supported 00:22:49.827 Endurance Groups: Not Supported 00:22:49.827 Predictable Latency Mode: Not Supported 00:22:49.827 Traffic Based Keep ALive: Not Supported 00:22:49.827 Namespace Granularity: Not Supported 00:22:49.827 SQ Associations: Not Supported 00:22:49.827 UUID List: Not Supported 00:22:49.827 Multi-Domain Subsystem: Not Supported 00:22:49.827 Fixed Capacity Management: Not Supported 00:22:49.827 Variable Capacity Management: Not Supported 00:22:49.827 Delete Endurance Group: Not Supported 00:22:49.827 Delete NVM Set: Not Supported 00:22:49.827 Extended LBA Formats Supported: Not Supported 00:22:49.827 Flexible Data Placement Supported: Not Supported 00:22:49.827 00:22:49.827 Controller Memory Buffer Support 00:22:49.827 ================================ 00:22:49.827 Supported: No 00:22:49.827 00:22:49.827 Persistent Memory Region Support 00:22:49.827 ================================ 00:22:49.827 Supported: No 00:22:49.827 00:22:49.827 Admin Command Set Attributes 00:22:49.827 ============================ 00:22:49.827 Security Send/Receive: Not Supported 00:22:49.827 Format NVM: Not Supported 00:22:49.827 Firmware Activate/Download: Not Supported 00:22:49.827 Namespace Management: Not Supported 00:22:49.827 Device Self-Test: Not Supported 00:22:49.827 Directives: Not Supported 00:22:49.827 NVMe-MI: Not Supported 00:22:49.827 Virtualization Management: Not Supported 00:22:49.827 Doorbell Buffer Config: Not Supported 00:22:49.827 Get LBA Status Capability: Not Supported 00:22:49.827 Command & Feature Lockdown Capability: Not Supported 00:22:49.827 Abort Command Limit: 4 00:22:49.827 Async Event Request Limit: 4 00:22:49.827 Number of Firmware Slots: N/A 00:22:49.827 Firmware Slot 1 Read-Only: N/A 00:22:49.827 Firmware Activation Without Reset: N/A 00:22:49.827 Multiple Update Detection Support: N/A 00:22:49.827 Firmware Update Granularity: No Information Provided 00:22:49.827 Per-Namespace SMART Log: No 00:22:49.827 Asymmetric Namespace Access Log Page: Not Supported 00:22:49.827 Subsystem NQN: 
nqn.2016-06.io.spdk:cnode1 00:22:49.827 Command Effects Log Page: Supported 00:22:49.827 Get Log Page Extended Data: Supported 00:22:49.827 Telemetry Log Pages: Not Supported 00:22:49.827 Persistent Event Log Pages: Not Supported 00:22:49.827 Supported Log Pages Log Page: May Support 00:22:49.827 Commands Supported & Effects Log Page: Not Supported 00:22:49.828 Feature Identifiers & Effects Log Page:May Support 00:22:49.828 NVMe-MI Commands & Effects Log Page: May Support 00:22:49.828 Data Area 4 for Telemetry Log: Not Supported 00:22:49.828 Error Log Page Entries Supported: 128 00:22:49.828 Keep Alive: Supported 00:22:49.828 Keep Alive Granularity: 10000 ms 00:22:49.828 00:22:49.828 NVM Command Set Attributes 00:22:49.828 ========================== 00:22:49.828 Submission Queue Entry Size 00:22:49.828 Max: 64 00:22:49.828 Min: 64 00:22:49.828 Completion Queue Entry Size 00:22:49.828 Max: 16 00:22:49.828 Min: 16 00:22:49.828 Number of Namespaces: 32 00:22:49.828 Compare Command: Supported 00:22:49.828 Write Uncorrectable Command: Not Supported 00:22:49.828 Dataset Management Command: Supported 00:22:49.828 Write Zeroes Command: Supported 00:22:49.828 Set Features Save Field: Not Supported 00:22:49.828 Reservations: Supported 00:22:49.828 Timestamp: Not Supported 00:22:49.828 Copy: Supported 00:22:49.828 Volatile Write Cache: Present 00:22:49.828 Atomic Write Unit (Normal): 1 00:22:49.828 Atomic Write Unit (PFail): 1 00:22:49.828 Atomic Compare & Write Unit: 1 00:22:49.828 Fused Compare & Write: Supported 00:22:49.828 Scatter-Gather List 00:22:49.828 SGL Command Set: Supported 00:22:49.828 SGL Keyed: Supported 00:22:49.828 SGL Bit Bucket Descriptor: Not Supported 00:22:49.828 SGL Metadata Pointer: Not Supported 00:22:49.828 Oversized SGL: Not Supported 00:22:49.828 SGL Metadata Address: Not Supported 00:22:49.828 SGL Offset: Supported 00:22:49.828 Transport SGL Data Block: Not Supported 00:22:49.828 Replay Protected Memory Block: Not Supported 00:22:49.828 00:22:49.828 Firmware Slot Information 00:22:49.828 ========================= 00:22:49.828 Active slot: 1 00:22:49.828 Slot 1 Firmware Revision: 25.01 00:22:49.828 00:22:49.828 00:22:49.828 Commands Supported and Effects 00:22:49.828 ============================== 00:22:49.828 Admin Commands 00:22:49.828 -------------- 00:22:49.828 Get Log Page (02h): Supported 00:22:49.828 Identify (06h): Supported 00:22:49.828 Abort (08h): Supported 00:22:49.828 Set Features (09h): Supported 00:22:49.828 Get Features (0Ah): Supported 00:22:49.828 Asynchronous Event Request (0Ch): Supported 00:22:49.828 Keep Alive (18h): Supported 00:22:49.828 I/O Commands 00:22:49.828 ------------ 00:22:49.828 Flush (00h): Supported LBA-Change 00:22:49.828 Write (01h): Supported LBA-Change 00:22:49.828 Read (02h): Supported 00:22:49.828 Compare (05h): Supported 00:22:49.828 Write Zeroes (08h): Supported LBA-Change 00:22:49.828 Dataset Management (09h): Supported LBA-Change 00:22:49.828 Copy (19h): Supported LBA-Change 00:22:49.828 00:22:49.828 Error Log 00:22:49.828 ========= 00:22:49.828 00:22:49.828 Arbitration 00:22:49.828 =========== 00:22:49.828 Arbitration Burst: 1 00:22:49.828 00:22:49.828 Power Management 00:22:49.828 ================ 00:22:49.828 Number of Power States: 1 00:22:49.828 Current Power State: Power State #0 00:22:49.828 Power State #0: 00:22:49.828 Max Power: 0.00 W 00:22:49.828 Non-Operational State: Operational 00:22:49.828 Entry Latency: Not Reported 00:22:49.828 Exit Latency: Not Reported 00:22:49.828 Relative Read Throughput: 0 00:22:49.828 
Relative Read Latency: 0 00:22:49.828 Relative Write Throughput: 0 00:22:49.828 Relative Write Latency: 0 00:22:49.828 Idle Power: Not Reported 00:22:49.828 Active Power: Not Reported 00:22:49.828 Non-Operational Permissive Mode: Not Supported 00:22:49.828 00:22:49.828 Health Information 00:22:49.828 ================== 00:22:49.828 Critical Warnings: 00:22:49.828 Available Spare Space: OK 00:22:49.828 Temperature: OK 00:22:49.828 Device Reliability: OK 00:22:49.828 Read Only: No 00:22:49.828 Volatile Memory Backup: OK 00:22:49.828 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:49.828 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:49.828 Available Spare: 0% 00:22:49.828 Available Spare Threshold: 0% 00:22:49.828 Life Percentage Used:[2024-11-19 09:24:50.646213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.828 [2024-11-19 09:24:50.646219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19e2690) 00:22:49.828 [2024-11-19 09:24:50.646226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.828 [2024-11-19 09:24:50.646240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44b80, cid 7, qid 0 00:22:49.828 [2024-11-19 09:24:50.646305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.828 [2024-11-19 09:24:50.646311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.828 [2024-11-19 09:24:50.646314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.828 [2024-11-19 09:24:50.646317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44b80) on tqpair=0x19e2690 00:22:49.828 [2024-11-19 09:24:50.646348] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:49.828 [2024-11-19 09:24:50.646357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44100) on tqpair=0x19e2690 00:22:49.828 [2024-11-19 09:24:50.646362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.828 [2024-11-19 09:24:50.646367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44280) on tqpair=0x19e2690 00:22:49.828 [2024-11-19 09:24:50.646371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.828 [2024-11-19 09:24:50.646375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44400) on tqpair=0x19e2690 00:22:49.828 [2024-11-19 09:24:50.646379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.828 [2024-11-19 09:24:50.646384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44580) on tqpair=0x19e2690 00:22:49.828 [2024-11-19 09:24:50.646388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.828 [2024-11-19 09:24:50.646396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.828 [2024-11-19 09:24:50.646400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.828 [2024-11-19 09:24:50.646403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e2690) 00:22:49.828 [2024-11-19 09:24:50.646409] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.828 [2024-11-19 09:24:50.646421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44580, cid 3, qid 0 00:22:49.828 [2024-11-19 09:24:50.646487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.828 [2024-11-19 09:24:50.646493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.828 [2024-11-19 09:24:50.646496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.828 [2024-11-19 09:24:50.646499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44580) on tqpair=0x19e2690 00:22:49.828 [2024-11-19 09:24:50.646506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.828 [2024-11-19 09:24:50.646509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.828 [2024-11-19 09:24:50.646512] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e2690) 00:22:49.828 [2024-11-19 09:24:50.646518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.828 [2024-11-19 09:24:50.646530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44580, cid 3, qid 0 00:22:49.829 [2024-11-19 09:24:50.646603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.829 [2024-11-19 09:24:50.646609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.829 [2024-11-19 09:24:50.646612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44580) on tqpair=0x19e2690 00:22:49.829 [2024-11-19 09:24:50.646619] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:49.829 [2024-11-19 09:24:50.646623] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:49.829 [2024-11-19 09:24:50.646631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e2690) 00:22:49.829 [2024-11-19 09:24:50.646644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.829 [2024-11-19 09:24:50.646653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44580, cid 3, qid 0 00:22:49.829 [2024-11-19 09:24:50.646718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.829 [2024-11-19 09:24:50.646724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.829 [2024-11-19 09:24:50.646727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44580) on tqpair=0x19e2690 00:22:49.829 [2024-11-19 09:24:50.646738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646745] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e2690) 00:22:49.829 [2024-11-19 09:24:50.646750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.829 [2024-11-19 09:24:50.646760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44580, cid 3, qid 0 00:22:49.829 [2024-11-19 09:24:50.646829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.829 [2024-11-19 09:24:50.646834] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.829 [2024-11-19 09:24:50.646837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44580) on tqpair=0x19e2690 00:22:49.829 [2024-11-19 09:24:50.646850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e2690) 00:22:49.829 [2024-11-19 09:24:50.646863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.829 [2024-11-19 09:24:50.646872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44580, cid 3, qid 0 00:22:49.829 [2024-11-19 09:24:50.646934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.829 [2024-11-19 09:24:50.646940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.829 [2024-11-19 09:24:50.646942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.646946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44580) on tqpair=0x19e2690 00:22:49.829 [2024-11-19 09:24:50.650964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.650968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.650971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19e2690) 00:22:49.829 [2024-11-19 09:24:50.650977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.829 [2024-11-19 09:24:50.650988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a44580, cid 3, qid 0 00:22:49.829 [2024-11-19 09:24:50.651059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:49.829 [2024-11-19 09:24:50.651065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:49.829 [2024-11-19 09:24:50.651068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:49.829 [2024-11-19 09:24:50.651072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a44580) on tqpair=0x19e2690 00:22:49.829 [2024-11-19 09:24:50.651078] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:22:49.829 0% 00:22:49.829 Data Units Read: 0 00:22:49.829 Data Units Written: 0 00:22:49.829 Host Read Commands: 0 00:22:49.829 Host Write Commands: 0 00:22:49.829 Controller Busy Time: 0 minutes 00:22:49.829 Power Cycles: 0 00:22:49.829 Power On Hours: 0 hours 00:22:49.829 Unsafe Shutdowns: 0 
00:22:49.829 Unrecoverable Media Errors: 0 00:22:49.829 Lifetime Error Log Entries: 0 00:22:49.829 Warning Temperature Time: 0 minutes 00:22:49.829 Critical Temperature Time: 0 minutes 00:22:49.829 00:22:49.829 Number of Queues 00:22:49.829 ================ 00:22:49.829 Number of I/O Submission Queues: 127 00:22:49.829 Number of I/O Completion Queues: 127 00:22:49.829 00:22:49.829 Active Namespaces 00:22:49.829 ================= 00:22:49.829 Namespace ID:1 00:22:49.829 Error Recovery Timeout: Unlimited 00:22:49.829 Command Set Identifier: NVM (00h) 00:22:49.829 Deallocate: Supported 00:22:49.829 Deallocated/Unwritten Error: Not Supported 00:22:49.829 Deallocated Read Value: Unknown 00:22:49.829 Deallocate in Write Zeroes: Not Supported 00:22:49.829 Deallocated Guard Field: 0xFFFF 00:22:49.829 Flush: Supported 00:22:49.829 Reservation: Supported 00:22:49.829 Namespace Sharing Capabilities: Multiple Controllers 00:22:49.829 Size (in LBAs): 131072 (0GiB) 00:22:49.829 Capacity (in LBAs): 131072 (0GiB) 00:22:49.829 Utilization (in LBAs): 131072 (0GiB) 00:22:49.829 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:49.829 EUI64: ABCDEF0123456789 00:22:49.829 UUID: 96da04ee-1452-4538-99d8-27e9fc089173 00:22:49.829 Thin Provisioning: Not Supported 00:22:49.829 Per-NS Atomic Units: Yes 00:22:49.829 Atomic Boundary Size (Normal): 0 00:22:49.829 Atomic Boundary Size (PFail): 0 00:22:49.829 Atomic Boundary Offset: 0 00:22:49.829 Maximum Single Source Range Length: 65535 00:22:49.829 Maximum Copy Length: 65535 00:22:49.829 Maximum Source Range Count: 1 00:22:49.829 NGUID/EUI64 Never Reused: No 00:22:49.829 Namespace Write Protected: No 00:22:49.829 Number of LBA Formats: 1 00:22:49.829 Current LBA Format: LBA Format #00 00:22:49.829 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:49.829 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.829 rmmod nvme_tcp 00:22:49.829 rmmod nvme_fabrics 00:22:49.829 rmmod nvme_keyring 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:49.829 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:49.829 09:24:50 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1194269 ']' 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1194269 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 1194269 ']' 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 1194269 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1194269 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1194269' 00:22:49.830 killing process with pid 1194269 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 1194269 00:22:49.830 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 1194269 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.089 09:24:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.996 09:24:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.255 00:22:52.255 real 0m10.065s 00:22:52.255 user 0m8.434s 00:22:52.255 sys 0m4.894s 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:52.255 ************************************ 00:22:52.255 END TEST nvmf_identify 00:22:52.255 ************************************ 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.255 ************************************ 00:22:52.255 START TEST nvmf_perf 00:22:52.255 ************************************ 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:52.255 * Looking for test storage... 00:22:52.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.255 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:52.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.256 --rc genhtml_branch_coverage=1 00:22:52.256 --rc genhtml_function_coverage=1 00:22:52.256 --rc genhtml_legend=1 00:22:52.256 --rc geninfo_all_blocks=1 00:22:52.256 --rc geninfo_unexecuted_blocks=1 00:22:52.256 00:22:52.256 ' 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:52.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.256 --rc genhtml_branch_coverage=1 00:22:52.256 --rc genhtml_function_coverage=1 00:22:52.256 --rc genhtml_legend=1 00:22:52.256 --rc geninfo_all_blocks=1 00:22:52.256 --rc geninfo_unexecuted_blocks=1 00:22:52.256 00:22:52.256 ' 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:52.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.256 --rc genhtml_branch_coverage=1 00:22:52.256 --rc genhtml_function_coverage=1 00:22:52.256 --rc genhtml_legend=1 00:22:52.256 --rc geninfo_all_blocks=1 00:22:52.256 --rc geninfo_unexecuted_blocks=1 00:22:52.256 00:22:52.256 ' 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:52.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.256 --rc genhtml_branch_coverage=1 00:22:52.256 --rc genhtml_function_coverage=1 00:22:52.256 --rc genhtml_legend=1 00:22:52.256 --rc geninfo_all_blocks=1 00:22:52.256 --rc geninfo_unexecuted_blocks=1 00:22:52.256 00:22:52.256 ' 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.256 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.516 09:24:53 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.516 09:24:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.932 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.932 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.932 09:24:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.932 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.932 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.933 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.192 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.192 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.192 09:24:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.192 09:24:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:22:58.192 00:22:58.192 --- 10.0.0.2 ping statistics --- 00:22:58.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.192 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:58.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:22:58.192 00:22:58.192 --- 10.0.0.1 ping statistics --- 00:22:58.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.192 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.192 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1198069 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1198069 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 1198069 ']' 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:58.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:58.450 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:58.450 [2024-11-19 09:24:59.324128] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:22:58.450 [2024-11-19 09:24:59.324181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.450 [2024-11-19 09:24:59.404602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.450 [2024-11-19 09:24:59.448063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.450 [2024-11-19 09:24:59.448102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.450 [2024-11-19 09:24:59.448110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.450 [2024-11-19 09:24:59.448116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.451 [2024-11-19 09:24:59.448120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.451 [2024-11-19 09:24:59.449576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.451 [2024-11-19 09:24:59.449688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.451 [2024-11-19 09:24:59.449793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.451 [2024-11-19 09:24:59.449795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.710 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:58.710 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:22:58.710 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.710 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.710 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:58.710 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.710 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:58.710 09:24:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:01.996 09:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:01.996 09:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:01.996 09:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:01.996 09:25:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:01.996 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
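
The records above show how host/perf.sh assembles its bdev list: gen_nvme.sh generates the local NVMe config, load_subsystem_config applies it, framework_get_config bdev piped through jq recovers the controller's PCIe address, and bdev_malloc_create adds a 64 MiB / 512 B malloc bdev. A minimal standalone sketch of the same RPC sequence (including the subsystem and listener calls that follow just below in the log), assuming a running nvmf_tgt and using only values visible in this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Recover the PCIe address of the local NVMe controller from the bdev
    # config, exactly as host/perf.sh does with jq.
    traddr=$($rpc framework_get_config bdev \
      | jq -r '.[].params | select(.name=="Nvme0").traddr')

    # 64 MiB malloc bdev with a 512-byte block size.
    $rpc bdev_malloc_create 64 512

    # Expose both bdevs over NVMe/TCP on 10.0.0.2:4420.
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
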
00:23:01.996 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:01.996 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:02.255 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:02.255 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:02.255 [2024-11-19 09:25:03.219118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.255 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.513 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:02.513 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.771 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:02.771 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:03.030 09:25:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.030 [2024-11-19 09:25:04.051697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.030 09:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:03.289 09:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:03.289 09:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:03.289 09:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:03.289 09:25:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:04.666 Initializing NVMe Controllers 00:23:04.666 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:04.666 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:04.666 Initialization complete. Launching workers. 
00:23:04.666 ======================================================== 00:23:04.666 Latency(us) 00:23:04.666 Device Information : IOPS MiB/s Average min max 00:23:04.666 PCIE (0000:5e:00.0) NSID 1 from core 0: 97336.83 380.22 328.34 31.80 5227.06 00:23:04.666 ======================================================== 00:23:04.666 Total : 97336.83 380.22 328.34 31.80 5227.06 00:23:04.666 00:23:04.666 09:25:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:06.043 Initializing NVMe Controllers 00:23:06.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:06.043 Initialization complete. Launching workers. 00:23:06.043 ======================================================== 00:23:06.043 Latency(us) 00:23:06.043 Device Information : IOPS MiB/s Average min max 00:23:06.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 89.68 0.35 11277.25 109.28 44914.13 00:23:06.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.78 0.24 16961.42 6980.11 48876.39 00:23:06.043 ======================================================== 00:23:06.043 Total : 151.46 0.59 13595.79 109.28 48876.39 00:23:06.043 00:23:06.043 09:25:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:07.429 Initializing NVMe Controllers 00:23:07.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:07.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:07.429 Initialization complete. Launching workers. 00:23:07.429 ======================================================== 00:23:07.429 Latency(us) 00:23:07.429 Device Information : IOPS MiB/s Average min max 00:23:07.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10807.00 42.21 2961.84 446.27 10156.43 00:23:07.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3871.00 15.12 8316.55 6257.20 19300.09 00:23:07.429 ======================================================== 00:23:07.429 Total : 14678.00 57.34 4374.03 446.27 19300.09 00:23:07.429 00:23:07.429 09:25:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:07.429 09:25:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:07.429 09:25:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:09.959 Initializing NVMe Controllers 00:23:09.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:09.959 Controller IO queue size 128, less than required. 00:23:09.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
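
The spdk_nvme_perf invocations in this stretch sweep queue depth and I/O size against the same target (-q depth, -o bytes, -w randrw -M 50 for a 50/50 random read/write mix, -t seconds, -r transport ID). One way to reproduce the sweep by hand, sketched under the assumption of the listener set up above (the test itself issues fixed invocations rather than a loop):

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

    # 4 KiB random 50/50 read/write for 1 s at increasing queue depths,
    # mirroring the -q 1 and -q 32 runs reported above.
    for qd in 1 32 128; do
      "$perf" -q "$qd" -o 4096 -w randrw -M 50 -t 1 -r "$trid"
    done

The MiB/s column is just IOPS times I/O size: 97336.83 IOPS x 4096 B / 2^20 = 380.22 MiB/s in the PCIe baseline table, and the fabric tables follow the same arithmetic.
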
00:23:09.959 Controller IO queue size 128, less than required. 00:23:09.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:09.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:09.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:09.959 Initialization complete. Launching workers. 00:23:09.959 ======================================================== 00:23:09.959 Latency(us) 00:23:09.959 Device Information : IOPS MiB/s Average min max 00:23:09.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1740.39 435.10 74777.90 41634.80 129121.59 00:23:09.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 637.96 159.49 213344.86 71548.48 326236.39 00:23:09.959 ======================================================== 00:23:09.959 Total : 2378.34 594.59 111946.59 41634.80 326236.39 00:23:09.959 00:23:09.959 09:25:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:10.218 No valid NVMe controllers or AIO or URING devices found 00:23:10.218 Initializing NVMe Controllers 00:23:10.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.218 Controller IO queue size 128, less than required. 00:23:10.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.218 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:10.218 Controller IO queue size 128, less than required. 00:23:10.218 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.218 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:10.218 WARNING: Some requested NVMe devices were skipped 00:23:10.218 09:25:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:12.752 Initializing NVMe Controllers 00:23:12.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.752 Controller IO queue size 128, less than required. 00:23:12.752 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:12.752 Controller IO queue size 128, less than required. 00:23:12.752 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:12.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:12.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:12.752 Initialization complete. Launching workers. 
00:23:12.752 00:23:12.752 ==================== 00:23:12.752 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:12.752 TCP transport: 00:23:12.752 polls: 10792 00:23:12.752 idle_polls: 7428 00:23:12.752 sock_completions: 3364 00:23:12.752 nvme_completions: 6247 00:23:12.752 submitted_requests: 9340 00:23:12.752 queued_requests: 1 00:23:12.752 00:23:12.752 ==================== 00:23:12.752 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:12.752 TCP transport: 00:23:12.752 polls: 10906 00:23:12.752 idle_polls: 7190 00:23:12.752 sock_completions: 3716 00:23:12.752 nvme_completions: 6697 00:23:12.752 submitted_requests: 10098 00:23:12.752 queued_requests: 1 00:23:12.752 ======================================================== 00:23:12.752 Latency(us) 00:23:12.752 Device Information : IOPS MiB/s Average min max 00:23:12.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1561.25 390.31 83609.61 55892.55 143299.50 00:23:12.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1673.74 418.43 77512.32 47847.52 139822.49 00:23:12.752 ======================================================== 00:23:12.752 Total : 3234.99 808.75 80454.96 47847.52 143299.50 00:23:12.752 00:23:13.011 09:25:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:13.011 09:25:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.011 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:13.011 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:13.011 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:13.011 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:13.011 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:13.011 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.011 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:13.011 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.011 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.011 rmmod nvme_tcp 00:23:13.269 rmmod nvme_fabrics 00:23:13.269 rmmod nvme_keyring 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1198069 ']' 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1198069 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 1198069 ']' 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 1198069 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:13.269 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1198069 00:23:13.270 09:25:14 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:13.270 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:13.270 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1198069' 00:23:13.270 killing process with pid 1198069 00:23:13.270 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 1198069 00:23:13.270 09:25:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 1198069 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.646 09:25:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.185 00:23:17.185 real 0m24.554s 00:23:17.185 user 1m4.236s 00:23:17.185 sys 0m8.311s 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.185 ************************************ 00:23:17.185 END TEST nvmf_perf 00:23:17.185 ************************************ 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.185 ************************************ 00:23:17.185 START TEST nvmf_fio_host 00:23:17.185 ************************************ 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:17.185 * Looking for test storage... 
00:23:17.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.185 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:17.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.186 --rc genhtml_branch_coverage=1 00:23:17.186 --rc genhtml_function_coverage=1 00:23:17.186 --rc genhtml_legend=1 00:23:17.186 --rc geninfo_all_blocks=1 00:23:17.186 --rc geninfo_unexecuted_blocks=1 00:23:17.186 00:23:17.186 ' 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:17.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.186 --rc genhtml_branch_coverage=1 00:23:17.186 --rc genhtml_function_coverage=1 00:23:17.186 --rc genhtml_legend=1 00:23:17.186 --rc geninfo_all_blocks=1 00:23:17.186 --rc geninfo_unexecuted_blocks=1 00:23:17.186 00:23:17.186 ' 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:17.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.186 --rc genhtml_branch_coverage=1 00:23:17.186 --rc genhtml_function_coverage=1 00:23:17.186 --rc genhtml_legend=1 00:23:17.186 --rc geninfo_all_blocks=1 00:23:17.186 --rc geninfo_unexecuted_blocks=1 00:23:17.186 00:23:17.186 ' 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:17.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.186 --rc genhtml_branch_coverage=1 00:23:17.186 --rc genhtml_function_coverage=1 00:23:17.186 --rc genhtml_legend=1 00:23:17.186 --rc geninfo_all_blocks=1 00:23:17.186 --rc geninfo_unexecuted_blocks=1 00:23:17.186 00:23:17.186 ' 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.186 09:25:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.186 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:17.187 
09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.187 09:25:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.759 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:23.760 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:23.760 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:23.760 Found net devices under 0000:86:00.0: cvl_0_0 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:23.760 Found net devices under 0000:86:00.1: cvl_0_1 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:23:23.760 00:23:23.760 --- 10.0.0.2 ping statistics --- 00:23:23.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.760 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:23:23.760 00:23:23.760 --- 10.0.0.1 ping statistics --- 00:23:23.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.760 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1204278 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.760 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1204278 00:23:23.761 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 1204278 ']' 00:23:23.761 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.761 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:23.761 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.761 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:23.761 09:25:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.761 [2024-11-19 09:25:23.953772] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
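The namespace plumbing traced above is the whole of the test topology: two ports of one NIC cabled back-to-back, with the target port hidden in its own network namespace so initiator and target traffic actually cross the wire instead of loopback. A condensed sketch of what nvmf_tcp_init does, assuming root and the interface names from this run (cvl_0_0, cvl_0_1):

    # the script first flushes any stale addresses with: ip -4 addr flush <dev>
    ip netns add cvl_0_0_ns_spdk                 # private namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                           # confirm the target side answers

The target binary is then launched under "ip netns exec cvl_0_0_ns_spdk", which is why every target-side command in this trace carries that prefix.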
00:23:23.761 [2024-11-19 09:25:23.953815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.761 [2024-11-19 09:25:24.033756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:23.761 [2024-11-19 09:25:24.076131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.761 [2024-11-19 09:25:24.076169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.761 [2024-11-19 09:25:24.076176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.761 [2024-11-19 09:25:24.076182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.761 [2024-11-19 09:25:24.076188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.761 [2024-11-19 09:25:24.077631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.761 [2024-11-19 09:25:24.077648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.761 [2024-11-19 09:25:24.077742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.761 [2024-11-19 09:25:24.077743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.761 09:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:23.761 09:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:23:23.761 09:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:23.761 [2024-11-19 09:25:24.350604] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.761 09:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:23.761 09:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.761 09:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.761 09:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:23.761 Malloc1 00:23:23.761 09:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.020 09:25:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:24.020 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.278 [2024-11-19 09:25:25.252342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.278 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:24.537 09:25:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:24.795 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:24.795 fio-3.35 00:23:24.795 Starting 1 thread 00:23:27.328 [2024-11-19 09:25:28.121960] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d04d0 is same with the state(6) to be set 00:23:27.328 [2024-11-19 09:25:28.122008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d04d0 is same with the state(6) to be set 00:23:27.328 00:23:27.328 test: (groupid=0, jobs=1): err= 0: pid=1204658: Tue Nov 19 09:25:28 2024 00:23:27.328 read: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.4MiB/2005msec) 00:23:27.328 slat (nsec): min=1563, max=240213, avg=1721.53, stdev=2214.63 00:23:27.328 clat (usec): min=3103, max=10434, avg=6080.80, stdev=445.60 00:23:27.328 lat (usec): min=3133, max=10435, avg=6082.52, stdev=445.51 00:23:27.328 clat percentiles (usec): 00:23:27.328 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:23:27.328 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:23:27.328 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6587], 95.00th=[ 6783], 00:23:27.328 | 99.00th=[ 7046], 99.50th=[ 7111], 99.90th=[ 8717], 99.95th=[ 9110], 00:23:27.328 | 99.99th=[10159] 00:23:27.328 bw ( KiB/s): min=45944, max=47320, per=99.96%, avg=46640.00, stdev=564.25, samples=4 00:23:27.328 iops : min=11486, max=11830, avg=11660.00, stdev=141.06, samples=4 00:23:27.328 write: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(90.7MiB/2005msec); 0 zone resets 00:23:27.328 slat (nsec): min=1597, max=223295, avg=1777.79, stdev=1631.47 00:23:27.328 clat (usec): min=2427, max=9262, avg=4891.87, stdev=374.36 00:23:27.328 lat (usec): min=2442, max=9263, avg=4893.65, stdev=374.35 00:23:27.328 clat percentiles (usec): 00:23:27.328 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4621], 00:23:27.328 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 4948], 00:23:27.328 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:23:27.328 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 7767], 99.95th=[ 8848], 00:23:27.328 | 99.99th=[ 9241] 00:23:27.328 bw ( KiB/s): min=46016, max=46720, per=99.99%, avg=46322.00, stdev=311.51, samples=4 00:23:27.328 iops : min=11504, max=11680, avg=11580.50, stdev=77.88, samples=4 00:23:27.328 lat (msec) : 4=0.41%, 10=99.58%, 20=0.01% 00:23:27.328 cpu : usr=73.85%, sys=25.10%, ctx=110, majf=0, minf=3 00:23:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:27.328 issued rwts: total=23387,23221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:27.328 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:27.328 00:23:27.328 Run status group 0 (all jobs): 00:23:27.328 READ: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.4MiB (95.8MB), run=2005-2005msec 00:23:27.328 WRITE: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=90.7MiB (95.1MB), run=2005-2005msec 00:23:27.328 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
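Stripped of the sanitizer bookkeeping traced around it (the helper runs ldd on the plugin and, when an ASan runtime is linked in, prepends that runtime to LD_PRELOAD), the fio_nvme invocation reduces to preloading the SPDK engine into a stock fio binary and addressing the target by transport tuple rather than a block device. Paths mirror this run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
        $SPDK/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The quoting matters: the whole "trtype=... ns=1" tuple is a single --filename argument that the spdk ioengine parses itself.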
00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:27.329 09:25:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:27.587 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:27.587 fio-3.35 00:23:27.587 Starting 1 thread 00:23:30.121 00:23:30.121 test: (groupid=0, jobs=1): err= 0: pid=1205220: Tue Nov 19 09:25:30 2024 00:23:30.121 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(338MiB/2005msec) 00:23:30.121 slat (usec): min=2, max=100, avg= 2.89, stdev= 1.67 00:23:30.121 clat (usec): min=1553, max=13101, avg=6790.53, stdev=1607.76 00:23:30.121 lat (usec): min=1555, max=13115, avg=6793.42, stdev=1607.90 00:23:30.121 clat percentiles (usec): 00:23:30.121 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5407], 00:23:30.121 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 7177], 00:23:30.121 | 70.00th=[ 7570], 80.00th=[ 8094], 90.00th=[ 8848], 95.00th=[ 9503], 00:23:30.121 | 99.00th=[11076], 
99.50th=[11731], 99.90th=[12387], 99.95th=[12649], 00:23:30.121 | 99.99th=[13042] 00:23:30.121 bw ( KiB/s): min=83584, max=94208, per=50.54%, avg=87240.00, stdev=4810.97, samples=4 00:23:30.121 iops : min= 5224, max= 5888, avg=5452.50, stdev=300.69, samples=4 00:23:30.121 write: IOPS=6303, BW=98.5MiB/s (103MB/s)(179MiB/1814msec); 0 zone resets 00:23:30.121 slat (usec): min=29, max=419, avg=32.47, stdev= 7.71 00:23:30.121 clat (usec): min=3393, max=14658, avg=8755.59, stdev=1509.62 00:23:30.121 lat (usec): min=3424, max=14769, avg=8788.06, stdev=1510.97 00:23:30.121 clat percentiles (usec): 00:23:30.121 | 1.00th=[ 5997], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7439], 00:23:30.121 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8979], 00:23:30.121 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10814], 95.00th=[11469], 00:23:30.121 | 99.00th=[12911], 99.50th=[13698], 99.90th=[14353], 99.95th=[14484], 00:23:30.121 | 99.99th=[14615] 00:23:30.121 bw ( KiB/s): min=87840, max=98304, per=90.30%, avg=91072.00, stdev=4857.61, samples=4 00:23:30.121 iops : min= 5490, max= 6144, avg=5692.00, stdev=303.60, samples=4 00:23:30.121 lat (msec) : 2=0.06%, 4=1.65%, 10=89.49%, 20=8.80% 00:23:30.121 cpu : usr=80.54%, sys=15.37%, ctx=197, majf=0, minf=3 00:23:30.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:30.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.122 issued rwts: total=21632,11434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.122 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.122 00:23:30.122 Run status group 0 (all jobs): 00:23:30.122 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=338MiB (354MB), run=2005-2005msec 00:23:30.122 WRITE: bw=98.5MiB/s (103MB/s), 98.5MiB/s-98.5MiB/s (103MB/s-103MB/s), io=179MiB (187MB), run=1814-1814msec 00:23:30.122 09:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.122 09:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:30.122 09:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:30.122 09:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:30.122 09:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:30.122 09:25:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.122 rmmod nvme_tcp 00:23:30.122 rmmod nvme_fabrics 00:23:30.122 rmmod nvme_keyring 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:30.122 09:25:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1204278 ']' 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1204278 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 1204278 ']' 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 1204278 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1204278 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1204278' 00:23:30.122 killing process with pid 1204278 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 1204278 00:23:30.122 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 1204278 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.381 09:25:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.919 00:23:32.919 real 0m15.624s 00:23:32.919 user 0m44.969s 00:23:32.919 sys 0m6.420s 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.919 ************************************ 00:23:32.919 END TEST nvmf_fio_host 00:23:32.919 ************************************ 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.919 ************************************ 00:23:32.919 START TEST nvmf_failover 00:23:32.919 ************************************ 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:32.919 * Looking for test storage... 00:23:32.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:32.919 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:32.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.920 --rc genhtml_branch_coverage=1 00:23:32.920 --rc genhtml_function_coverage=1 00:23:32.920 --rc genhtml_legend=1 00:23:32.920 --rc geninfo_all_blocks=1 00:23:32.920 --rc geninfo_unexecuted_blocks=1 00:23:32.920 00:23:32.920 ' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:32.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.920 --rc genhtml_branch_coverage=1 00:23:32.920 --rc genhtml_function_coverage=1 00:23:32.920 --rc genhtml_legend=1 00:23:32.920 --rc geninfo_all_blocks=1 00:23:32.920 --rc geninfo_unexecuted_blocks=1 00:23:32.920 00:23:32.920 ' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:32.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.920 --rc genhtml_branch_coverage=1 00:23:32.920 --rc genhtml_function_coverage=1 00:23:32.920 --rc genhtml_legend=1 00:23:32.920 --rc geninfo_all_blocks=1 00:23:32.920 --rc geninfo_unexecuted_blocks=1 00:23:32.920 00:23:32.920 ' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:32.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.920 --rc genhtml_branch_coverage=1 00:23:32.920 --rc genhtml_function_coverage=1 00:23:32.920 --rc genhtml_legend=1 00:23:32.920 --rc geninfo_all_blocks=1 00:23:32.920 --rc geninfo_unexecuted_blocks=1 00:23:32.920 00:23:32.920 ' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
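One genuine shell bug surfaces in this trace: "common.sh: line 33: [: : integer expression expected" comes from the test '[' '' -eq 1 ']', a numeric comparison fed an empty string. The trace does not show which variable expanded empty, so VAR below is a stand-in; a parameter default keeps the test well-formed either way:

    VAR=""                                                   # whatever common.sh line 33 expands empty
    [ "${VAR:-0}" -eq 1 ] && echo enabled || echo disabled   # prints "disabled", no error

As written, the script only survives because the malformed test returns false and execution falls through; the default would make that survival deliberate.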
00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.920 09:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:39.497 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:39.497 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:39.497 Found net devices under 0000:86:00.0: cvl_0_0 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.497 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:39.497 Found net devices under 0000:86:00.1: cvl_0_1 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:23:39.498 00:23:39.498 --- 10.0.0.2 ping statistics --- 00:23:39.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.498 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:23:39.498 00:23:39.498 --- 10.0.0.1 ping statistics --- 00:23:39.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.498 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1209182 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1209182 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1209182 ']' 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:39.498 [2024-11-19 09:25:39.645555] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:23:39.498 [2024-11-19 09:25:39.645606] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.498 [2024-11-19 09:25:39.726278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:39.498 [2024-11-19 09:25:39.769301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
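Two details are worth noting at this point in the trace. First, the failover target is started with -m 0xE, pinning reactors to cores 1-3 and leaving core 0 free, where the earlier fio-host run used 0xF and took all four cores. Second, waitforlisten is what turns "start the app" into "app is ready": it polls the RPC socket instead of sleeping a fixed interval. A self-contained sketch of that loop (illustrative only; the real helper lives in autotest_common.sh and differs in detail):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1     # app died before listening
            "$rpc_py" -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                       # never came up
    }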
00:23:39.498 [2024-11-19 09:25:39.769338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.498 [2024-11-19 09:25:39.769345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.498 [2024-11-19 09:25:39.769351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.498 [2024-11-19 09:25:39.769356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.498 [2024-11-19 09:25:39.770817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.498 [2024-11-19 09:25:39.770923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.498 [2024-11-19 09:25:39.770925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.498 09:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:39.498 [2024-11-19 09:25:40.075274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.498 09:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:39.498 Malloc0 00:23:39.498 09:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.498 09:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.758 09:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.016 [2024-11-19 09:25:40.903659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.017 09:25:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:40.276 [2024-11-19 09:25:41.108248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.276 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:40.276 [2024-11-19 09:25:41.304897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1209457 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1209457 /var/tmp/bdevperf.sock 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1209457 ']' 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:40.535 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:40.794 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:40.794 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:40.794 09:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:41.053 NVMe0n1 00:23:41.053 09:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:41.622 00:23:41.622 09:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:41.622 09:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1209686 00:23:41.622 09:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:42.560 09:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.560 [2024-11-19 09:25:43.611130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a3d0 is same with the state(6) to be set 00:23:42.560 [2024-11-19 09:25:43.611184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a3d0 is same with the state(6) to be set 00:23:42.560 [2024-11-19 09:25:43.611192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a3d0 is same with the state(6) to be set 00:23:42.560 
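Condensing the RPC traffic traced above: the target gets a TCP transport, a RAM-backed bdev, one subsystem and three listeners, and the bdevperf host attaches two of those listeners as redundant paths under a single bdev name. A sketch of the same sequence (the port loop is shorthand for the three add_listener calls; -x failover selects SPDK's failover multipath policy, under which the extra path is used only when the active one drops):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  # Target side: transport, 64 MiB bdev with 512 B blocks, subsystem, namespace,
  # and one listener per port that the test will later tear down and restore.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # Host side: bdevperf waits for RPC configuration (-z) and keeps running on
  # I/O errors (-f); both attach calls target the same NQN, so NVMe0n1 ends up
  # with a primary path (4420) plus a failover path (4421).
  "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &

From here the test alternately removes and re-adds listeners while the 15-second verify job runs; each nvmf_subsystem_remove_listener below forces the initiator off the active path, which is what produces the bursts of tqpair recv-state errors and the aborted commands in the try.txt dump further down.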
[2024-11-19 09:25:43.611198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a3d0 is same with the state(6) to be set 00:23:42.820
09:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:46.111
09:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:46.111
00:23:46.111 09:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:46.111
[2024-11-19 09:25:47.117514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5b220 is same with the state(6) to be set 00:23:46.112
09:25:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:49.401
09:25:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.401
[2024-11-19 09:25:50.332089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.401
09:25:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:50.338
09:25:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:50.597
09:25:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1209686 00:23:57.173
{ 00:23:57.173 "results": [ 00:23:57.173 { 00:23:57.173 "job": "NVMe0n1", 00:23:57.173 "core_mask": "0x1", 00:23:57.173 "workload": "verify", 00:23:57.173 "status": "finished", 00:23:57.173 "verify_range": { 00:23:57.173 "start": 0, 00:23:57.173 "length": 16384 00:23:57.173 }, 00:23:57.173 "queue_depth": 128, 00:23:57.173 "io_size": 4096, 00:23:57.173 "runtime": 15.004313, 00:23:57.173 "iops": 11006.301987968392, 00:23:57.173 "mibps": 42.99336714050153, 00:23:57.173 "io_failed": 7861, 00:23:57.173 "io_timeout": 0, 00:23:57.173 "avg_latency_us": 11078.67334747902, 00:23:57.173 "min_latency_us":
438.09391304347827, 00:23:57.173 "max_latency_us": 24162.838260869565 00:23:57.173 } 00:23:57.173 ], 00:23:57.173 "core_count": 1 00:23:57.173 } 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1209457 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1209457 ']' 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1209457 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1209457 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1209457' 00:23:57.173 killing process with pid 1209457 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1209457 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1209457 00:23:57.173 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:57.173 [2024-11-19 09:25:41.381405] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:23:57.173 [2024-11-19 09:25:41.381456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1209457 ] 00:23:57.173 [2024-11-19 09:25:41.457280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.173 [2024-11-19 09:25:41.498826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.173 Running I/O for 15 seconds... 
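Two sanity notes on the summary JSON above. First, io_failed (7861) matches the failure mode shown in the dump below: commands in flight when a listener is torn down complete with ABORTED - SQ DELETION, and the -f flag lets bdevperf keep the workload running through them. Second, the throughput figures are self-consistent, since MiB/s is just iops x io_size / 2^20; a quick check with the values copied from the JSON:

  awk 'BEGIN {
      iops = 11006.301987968392; io_size = 4096; runtime = 15.004313
      printf "mibps = %.11f\n", iops * io_size / (1024 * 1024)  # prints 42.99336714050, as reported
      printf "io completed ~ %d\n", iops * runtime              # ~165142 I/Os over the 15 s window
  }'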
00:23:57.173 10962.00 IOPS, 42.82 MiB/s [2024-11-19T08:25:58.232Z] [2024-11-19 09:25:43.613092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 
09:25:43.613433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.173 [2024-11-19 09:25:43.613485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.173 [2024-11-19 09:25:43.613493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.174 [2024-11-19 09:25:43.613603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.174 [2024-11-19 09:25:43.613618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.174 [2024-11-19 09:25:43.613632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.174 [2024-11-19 09:25:43.613721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-11-19 09:25:43.613730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:64 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.174 [2024-11-19 09:25:43.613736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 23 further in-flight READs (sqid:1, lba 97856-98032, len:8, SGL TRANSPORT DATA BLOCK) and 48 in-flight WRITEs (sqid:1, lba 98064-98440, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08); the records differ only in cid/lba ...]
00:23:57.176 [2024-11-19 09:25:43.614830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:57.176 [2024-11-19 09:25:43.614835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:57.176 [2024-11-19 09:25:43.614840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98456 len:8 PRP1 0x0 PRP2 0x0
00:23:57.176 [2024-11-19 09:25:43.614846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 16 queued WRITEs in all (lba 98448-98568, PRP1 0x0 PRP2 0x0) aborted by nvme_qpair_abort_queued_reqs and completed manually with the same status ...]
00:23:57.176 [2024-11-19 09:25:43.628616] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:57.176 [2024-11-19 09:25:43.628644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.176 [2024-11-19 09:25:43.628657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUESTs cid:1-3 on the admin qpair aborted with the same status ...]
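The failover notice above records the bdev layer retargeting the controller from the first TCP path to the second; the in-flight and queued aborts are the expected fallout of deleting the submission queues on the old path. A toy Python model of the sequence these notices trace (my own sketch, not SPDK's bdev_nvme implementation; every name in it is invented for illustration):

```python
# Toy model of the failover sequence traced by the notices above
# (abort in-flight I/O -> pick next trid -> reset). Illustrative only;
# this is NOT SPDK's implementation and all names here are invented.
class Controller:
    def __init__(self, trids):
        self.trids = trids          # alternate paths registered up front
        self.active = 0

    def on_path_failure(self):
        frm = self.trids[self.active]
        self.active = (self.active + 1) % len(self.trids)
        to = self.trids[self.active]
        print(f"Start failover from {frm} to {to}")
        print("resetting controller")   # disconnect, then reconnect on the new trid
        print("Resetting controller successful.")

ctrlr = Controller(["10.0.0.2:4420", "10.0.0.2:4421"])
ctrlr.on_path_failure()
```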
00:23:57.177 [2024-11-19 09:25:43.628734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:57.177 [2024-11-19 09:25:43.628766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e340 (9): Bad file descriptor
00:23:57.177 [2024-11-19 09:25:43.632659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:57.177 [2024-11-19 09:25:43.661679] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:57.177 10852.00 IOPS, 42.39 MiB/s [2024-11-19T08:25:58.236Z]
00:23:57.177 10916.67 IOPS, 42.64 MiB/s [2024-11-19T08:25:58.236Z]
00:23:57.177 10962.50 IOPS, 42.82 MiB/s [2024-11-19T08:25:58.236Z]
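A note for readers decoding these records: the pair in parentheses, e.g. (00/08), is (status code type / status code) in hex as printed by spdk_nvme_print_completion; SCT 0x0 is the NVMe generic command status set, in which SC 0x08 is Command Aborted due to SQ Deletion. A minimal Python sketch (helper names are mine, not part of the test suite) that decodes the one status seen here and sanity-checks the IOPS samples against the 8-sector (4 KiB) I/O size visible in the commands:

```python
# Hedged helper, not part of the SPDK test suite: decodes the "(sct/sc)"
# pairs printed above and cross-checks the throughput samples.

# NVMe generic command status (SCT 0x0); only the value seen in this log.
GENERIC_SC = {0x08: "ABORTED - SQ DELETION"}  # Command Aborted due to SQ Deletion

def decode(sct: int, sc: int) -> str:
    """Render a status the way the log prints it, e.g. (00/08)."""
    name = GENERIC_SC.get(sc, "UNKNOWN") if sct == 0 else "NON-GENERIC"
    return f"{name} ({sct:02x}/{sc:02x})"

def mib_per_s(iops: float, io_bytes: int = 8 * 512) -> float:
    """Throughput implied by an IOPS sample at len:8 sectors (4 KiB) per I/O."""
    return iops * io_bytes / (1024 * 1024)

print(decode(0x0, 0x08))                   # ABORTED - SQ DELETION (00/08)
print(f"{mib_per_s(10852.00):.2f} MiB/s")  # 42.39 MiB/s, matching the sample
```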
00:23:57.177 [2024-11-19 09:25:47.119610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.177 [2024-11-19 09:25:47.119645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... a second abort burst follows the reset: further READs (sqid:1, lba 22736-23016, len:8, SGL TRANSPORT DATA BLOCK) and WRITEs (sqid:1, lba 23024-23504, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08); the records differ only in cid/lba ...]
00:23:57.179 [2024-11-19 09:25:47.121112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.179 [2024-11-19 09:25:47.121118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.179 [2024-11-19 09:25:47.121284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.179 [2024-11-19 09:25:47.121292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121426] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:57.180 [2024-11-19 09:25:47.121495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.180 [2024-11-19 09:25:47.121522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:8 PRP1 0x0 PRP2 0x0 00:23:57.180 [2024-11-19 09:25:47.121529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.180 [2024-11-19 09:25:47.121544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.180 [2024-11-19 09:25:47.121550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23720 len:8 PRP1 0x0 PRP2 0x0 00:23:57.180 [2024-11-19 09:25:47.121557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.180 [2024-11-19 09:25:47.121569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.180 [2024-11-19 09:25:47.121575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23728 len:8 PRP1 0x0 PRP2 0x0 00:23:57.180 [2024-11-19 09:25:47.121581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.180 [2024-11-19 09:25:47.121595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:23:57.180 [2024-11-19 09:25:47.121600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23736 len:8 PRP1 0x0 PRP2 0x0 00:23:57.180 [2024-11-19 09:25:47.121607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.180 [2024-11-19 09:25:47.121618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.180 [2024-11-19 09:25:47.121625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:8 PRP1 0x0 PRP2 0x0 00:23:57.180 [2024-11-19 09:25:47.121631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121674] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:57.180 [2024-11-19 09:25:47.121699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.180 [2024-11-19 09:25:47.121707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.180 [2024-11-19 09:25:47.121722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.180 [2024-11-19 09:25:47.121736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.180 [2024-11-19 09:25:47.121750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.180 [2024-11-19 09:25:47.121756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:57.180 [2024-11-19 09:25:47.121788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e340 (9): Bad file descriptor 00:23:57.180 [2024-11-19 09:25:47.124599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:57.180 [2024-11-19 09:25:47.186908] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
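Every in-flight command above is failed back with the same status, which SPDK renders as "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion). That is the expected side effect of tearing the queue pair down during failover, not a media or transport data error. A minimal sketch of how that "(SCT/SC)" pair is packed into the CQE status field, per the NVMe base specification (the helper name and example value here are illustrative, not part of the test):

    decode_cqe_status() {
        # status = completion queue entry dword 3, bits 31:16
        # (after the shift: bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT,
        #  bit 14 = More, bit 15 = Do Not Retry)
        local status=$1
        local p=$((   status         & 0x1  ))   # phase tag
        local sc=$((  (status >> 1)  & 0xff ))   # status code
        local sct=$(( (status >> 9)  & 0x7  ))   # status code type
        local m=$((   (status >> 14) & 0x1  ))   # more info in error log page
        local dnr=$(( (status >> 15) & 0x1  ))   # do not retry
        printf '(%02x/%02x) p:%d m:%d dnr:%d\n' "$sct" "$sc" "$p" "$m" "$dnr"
    }
    # An SQ-deletion abort (SCT 0x0, SC 0x08) with all flags clear:
    decode_cqe_status $(( 0x08 << 1 ))   # prints "(00/08) p:0 m:0 dnr:0"

Note dnr:0 on every completion: the abort is marked retriable, which is presumably why the workload keeps running (and IOPS recover) once the controller is reset onto the next path.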
00:23:57.180 10818.40 IOPS, 42.26 MiB/s [2024-11-19T08:25:58.239Z] 10904.83 IOPS, 42.60 MiB/s [2024-11-19T08:25:58.239Z] 10923.43 IOPS, 42.67 MiB/s [2024-11-19T08:25:58.239Z] 10953.62 IOPS, 42.79 MiB/s [2024-11-19T08:25:58.239Z] 10967.78 IOPS, 42.84 MiB/s
00:23:57.180 [2024-11-19 09:25:51.547880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:57.180 [2024-11-19 09:25:51.547920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.180 [2024-11-19 09:25:51.547935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:57.180 [2024-11-19 09:25:51.547942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/"ABORTED - SQ DELETION (00/08)" pair repeats for every outstanding I/O on qid:1: WRITEs lba 42712 through 42896 and READs lba 41888 through 42688 ...]
00:23:57.184 [2024-11-19 09:25:51.549905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ba0c0 is same with the state(6) to be set
00:23:57.184 [2024-11-19 09:25:51.549914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:57.184 [2024-11-19 09:25:51.549920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:57.184 [2024-11-19 09:25:51.549926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42696 len:8 PRP1 0x0 PRP2 0x0
00:23:57.184 [2024-11-19 09:25:51.549933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:57.184 [2024-11-19 09:25:51.549982] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:57.184 [2024-11-19 09:25:51.550007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:57.184 [2024-11-19 09:25:51.550015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining ASYNC EVENT REQUESTs on the admin queue, cid 1 through 3, are aborted the same way ...]
00:23:57.184 [2024-11-19 09:25:51.550066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:57.184 [2024-11-19 09:25:51.552923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:57.184 [2024-11-19 09:25:51.552956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65e340 (9): Bad file descriptor
00:23:57.184 [2024-11-19 09:25:51.619132] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:23:57.184 10901.60 IOPS, 42.58 MiB/s [2024-11-19T08:25:58.243Z] 10926.00 IOPS, 42.68 MiB/s [2024-11-19T08:25:58.243Z] 10950.67 IOPS, 42.78 MiB/s [2024-11-19T08:25:58.243Z] 10984.85 IOPS, 42.91 MiB/s [2024-11-19T08:25:58.243Z] 10998.57 IOPS, 42.96 MiB/s
00:23:57.184 Latency(us)
00:23:57.184 [2024-11-19T08:25:58.243Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:57.184 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:57.184 Verification LBA range: start 0x0 length 0x4000
00:23:57.184 NVMe0n1                                :      15.00   11006.30      42.99     523.92       0.00   11078.67     438.09   24162.84
00:23:57.184 [2024-11-19T08:25:58.243Z] ===================================================================================================================
00:23:57.184 [2024-11-19T08:25:58.243Z] Total                                  :            11006.30      42.99     523.92       0.00   11078.67     438.09   24162.84
00:23:57.184 Received shutdown signal, test time was about 15.000000 seconds
00:23:57.184
00:23:57.184 Latency(us)
00:23:57.184 [2024-11-19T08:25:58.243Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:57.184 [2024-11-19T08:25:58.243Z] ===================================================================================================================
00:23:57.184 [2024-11-19T08:25:58.243Z] Total                                  :                0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1212024
00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1212024 /var/tmp/bdevperf.sock 00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1212024 ']' 00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:57.184 09:25:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:57.184 09:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.184 09:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:57.184 09:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:57.442 [2024-11-19 09:25:58.220034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:57.442 09:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:57.442 [2024-11-19 09:25:58.420625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:57.442 09:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:58.008 NVMe0n1 00:23:58.008 09:25:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:58.265 00:23:58.265 09:25:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:58.522 00:23:58.522 09:25:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:58.522 09:25:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:58.780 09:25:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:58.780 09:25:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 
3 00:24:02.068 09:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:02.068 09:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:02.068 09:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1212914 00:24:02.068 09:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:02.068 09:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1212914 00:24:03.444 { 00:24:03.444 "results": [ 00:24:03.444 { 00:24:03.444 "job": "NVMe0n1", 00:24:03.444 "core_mask": "0x1", 00:24:03.444 "workload": "verify", 00:24:03.444 "status": "finished", 00:24:03.444 "verify_range": { 00:24:03.444 "start": 0, 00:24:03.444 "length": 16384 00:24:03.444 }, 00:24:03.444 "queue_depth": 128, 00:24:03.444 "io_size": 4096, 00:24:03.444 "runtime": 1.043583, 00:24:03.444 "iops": 10495.571507009983, 00:24:03.444 "mibps": 40.99832619925775, 00:24:03.444 "io_failed": 0, 00:24:03.444 "io_timeout": 0, 00:24:03.444 "avg_latency_us": 11686.90083399823, 00:24:03.444 "min_latency_us": 2393.488695652174, 00:24:03.444 "max_latency_us": 43766.65043478261 00:24:03.444 } 00:24:03.444 ], 00:24:03.444 "core_count": 1 00:24:03.444 } 00:24:03.444 09:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:03.444 [2024-11-19 09:25:57.830856] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:24:03.444 [2024-11-19 09:25:57.830909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212024 ] 00:24:03.444 [2024-11-19 09:25:57.907974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.444 [2024-11-19 09:25:57.946104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.444 [2024-11-19 09:25:59.798601] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:03.444 [2024-11-19 09:25:59.798649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.444 [2024-11-19 09:25:59.798662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.444 [2024-11-19 09:25:59.798671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.444 [2024-11-19 09:25:59.798678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.444 [2024-11-19 09:25:59.798686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.444 [2024-11-19 09:25:59.798693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.444 [2024-11-19 09:25:59.798700] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.444 [2024-11-19 09:25:59.798707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.444 [2024-11-19 09:25:59.798713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:03.444 [2024-11-19 09:25:59.798738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:03.444 [2024-11-19 09:25:59.798754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1463340 (9): Bad file descriptor 00:24:03.444 [2024-11-19 09:25:59.809379] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:03.444 Running I/O for 1 seconds... 00:24:03.444 10825.00 IOPS, 42.29 MiB/s 00:24:03.444 Latency(us) 00:24:03.444 [2024-11-19T08:26:04.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.444 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:03.444 Verification LBA range: start 0x0 length 0x4000 00:24:03.444 NVMe0n1 : 1.04 10495.57 41.00 0.00 0.00 11686.90 2393.49 43766.65 00:24:03.444 [2024-11-19T08:26:04.503Z] =================================================================================================================== 00:24:03.444 [2024-11-19T08:26:04.503Z] Total : 10495.57 41.00 0.00 0.00 11686.90 2393.49 43766.65 00:24:03.444 09:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.444 09:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:03.445 09:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:03.703 09:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:03.703 09:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:03.962 09:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:03.962 09:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:07.248 09:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.248 09:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1212024 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1212024 ']' 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1212024 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # 
uname 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1212024 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1212024' 00:24:07.248 killing process with pid 1212024 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1212024 00:24:07.248 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1212024 00:24:07.507 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:07.507 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:07.766 rmmod nvme_tcp 00:24:07.766 rmmod nvme_fabrics 00:24:07.766 rmmod nvme_keyring 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1209182 ']' 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1209182 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1209182 ']' 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1209182 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1209182 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:07.766 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:07.767 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 1209182' 00:24:07.767 killing process with pid 1209182 00:24:07.767 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1209182 00:24:07.767 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1209182 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.026 09:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.934 09:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:09.934 00:24:09.934 real 0m37.533s 00:24:09.934 user 1m58.850s 00:24:09.934 sys 0m8.028s 00:24:09.934 09:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:09.934 09:26:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:09.934 ************************************ 00:24:09.934 END TEST nvmf_failover 00:24:09.934 ************************************ 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.194 ************************************ 00:24:10.194 START TEST nvmf_host_discovery 00:24:10.194 ************************************ 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:10.194 * Looking for test storage... 
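The nvmftestfini teardown traced just above unloads the kernel NVMe/TCP modules, kills the remaining nvmf_tgt, strips SPDK's iptables rules, and removes the test namespace. A condensed standalone sketch of that sequence, using the device and namespace names from this run (killprocess and _remove_spdk_ns are autotest_common.sh helpers; the `ip netns delete` line is an assumption about what _remove_spdk_ns does here, not its verbatim body):

  # nvmftestfini, condensed from the xtrace above
  modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess in the trace
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1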
00:24:10.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.194 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:10.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.195 --rc genhtml_branch_coverage=1 00:24:10.195 --rc genhtml_function_coverage=1 00:24:10.195 --rc genhtml_legend=1 00:24:10.195 --rc geninfo_all_blocks=1 00:24:10.195 --rc geninfo_unexecuted_blocks=1 00:24:10.195 00:24:10.195 ' 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:10.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.195 --rc genhtml_branch_coverage=1 00:24:10.195 --rc genhtml_function_coverage=1 00:24:10.195 --rc genhtml_legend=1 00:24:10.195 --rc geninfo_all_blocks=1 00:24:10.195 --rc geninfo_unexecuted_blocks=1 00:24:10.195 00:24:10.195 ' 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:10.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.195 --rc genhtml_branch_coverage=1 00:24:10.195 --rc genhtml_function_coverage=1 00:24:10.195 --rc genhtml_legend=1 00:24:10.195 --rc geninfo_all_blocks=1 00:24:10.195 --rc geninfo_unexecuted_blocks=1 00:24:10.195 00:24:10.195 ' 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:10.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.195 --rc genhtml_branch_coverage=1 00:24:10.195 --rc genhtml_function_coverage=1 00:24:10.195 --rc genhtml_legend=1 00:24:10.195 --rc geninfo_all_blocks=1 00:24:10.195 --rc geninfo_unexecuted_blocks=1 00:24:10.195 00:24:10.195 ' 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:10.195 09:26:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.195 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.454 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:10.455 09:26:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.030 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:17.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:17.031 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.031 09:26:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:17.031 Found net devices under 0000:86:00.0: cvl_0_0 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:17.031 Found net devices under 0000:86:00.1: cvl_0_1 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.031 
09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.031 09:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:24:17.031 00:24:17.031 --- 10.0.0.2 ping statistics --- 00:24:17.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.031 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
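The two single-packet pings here are the connectivity gate for everything that follows: the target address (10.0.0.2 on cvl_0_0, inside the cvl_0_0_ns_spdk namespace) and the initiator address (10.0.0.1 on cvl_0_1) must reach each other before any NVMe/TCP traffic is attempted. The nvmf_tcp_init steps traced above reduce to roughly the following, with the interface names from this run (the real helper tags the iptables rule with a comment embedding the full rule text so teardown can strip it; the short 'SPDK_NVMF' tag below is a simplification):

  # Move one port of the NIC pair into a namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, then verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF'
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator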
00:24:17.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:24:17.031 00:24:17.031 --- 10.0.0.1 ping statistics --- 00:24:17.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.031 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.031 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1217365 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1217365 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1217365 ']' 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:17.032 [2024-11-19 09:26:17.220830] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
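nvmfappstart above boots the target application inside the namespace and blocks until its RPC socket answers. Roughly, with the paths from this run (the polling loop is a minimal stand-in for the waitforlisten helper, not its actual implementation):

  # Launch nvmf_tgt on core mask 0x2 inside the target namespace
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Wait until the app serves RPCs on /var/tmp/spdk.sock (waitforlisten stand-in)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done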
00:24:17.032 [2024-11-19 09:26:17.220884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:17.032 [2024-11-19 09:26:17.301868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:17.032 [2024-11-19 09:26:17.343013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:17.032 [2024-11-19 09:26:17.343051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:17.032 [2024-11-19 09:26:17.343058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:17.032 [2024-11-19 09:26:17.343064] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:17.032 [2024-11-19 09:26:17.343069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:17.032 [2024-11-19 09:26:17.343630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 [2024-11-19 09:26:17.478830] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 [2024-11-19 09:26:17.490982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 null0
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 null1
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1217385
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1217385 /tmp/host.sock
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1217385 ']'
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:24:17.032 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 [2024-11-19 09:26:17.568256] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
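Stripped of the xtrace noise, the target-side setup traced in this block is a handful of RPCs: create the TCP transport, expose the well-known discovery subsystem on port 8009, and back the test with two null bdevs. As plain rpc.py calls (rpc_cmd in the trace is a thin wrapper around rpc.py; the options are exactly those shown above):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009       # discovery service listener
    rpc.py bdev_null_create null0 1000 512   # 1000 MB null bdev, 512 B blocks
    rpc.py bdev_null_create null1 1000 512
    rpc.py bdev_wait_for_examine             # let bdev examine callbacks finish

A second nvmf_tgt (the "host" side, -r /tmp/host.sock) is then started so the test can drive the NVMe/TCP initiator through its own RPC socket.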
00:24:17.032 [2024-11-19 09:26:17.568297] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217385 ]
00:24:17.032 [2024-11-19 09:26:17.643207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:17.032 [2024-11-19 09:26:17.686688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:24:17.032 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:17.033 09:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:17.033 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.292 [2024-11-19 09:26:18.116593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
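get_subsystem_names and get_bdev_list, which this block keeps re-running as assertions, reduce to two jq pipelines over host-side RPCs; the trace shows every stage (rpc_cmd, jq, sort, xargs). Reconstructed as shell functions matching the traced commands:

    get_subsystem_names() {   # controller names seen by the host, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # attached namespaces, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }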
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:17.292 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]]
00:24:17.293 09:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1
00:24:17.860 [2024-11-19 09:26:18.806403] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:17.860 [2024-11-19 09:26:18.806423] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:17.860 [2024-11-19 09:26:18.806436] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:17.860 [2024-11-19 09:26:18.894705] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:24:18.119 [2024-11-19 09:26:18.956357] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
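The @916-@922 lines above are waitforcondition expanding: it evals an arbitrary condition string up to ten times, one second apart. Reassembled from the traced lines (this run only exercises the success path, so the timeout return is inferred):

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition strings like
            sleep 1                    # '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        done
        return 1                       # inferred timeout path
    }

Here the condition fails once ('' != nvme0), the helper sleeps, and in the meantime the discovery poller attaches the controller, as the bdev_nvme INFO lines show.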
00:24:18.119 [2024-11-19 09:26:18.957143] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2129ed0:1 started.
00:24:18.119 [2024-11-19 09:26:18.958541] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:18.119 [2024-11-19 09:26:18.958556] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:18.119 [2024-11-19 09:26:18.965356] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2129ed0 was disconnected and freed. delete nvme_qpair.
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:18.378 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]]
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.637 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:18.896 [2024-11-19 09:26:19.706693] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x212a2a0:1 started.
00:24:18.896 [2024-11-19 09:26:19.717188] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x212a2a0 was disconnected and freed. delete nvme_qpair.
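Two more probes appear in this stretch. get_subsystem_paths lists the trsvcid of every path behind a controller, and get_notification_count counts bdev add/remove events past the last consumed notify_id. Reconstructed from the traced pipelines:

    get_subsystem_paths() {   # e.g. "4420" now, "4420 4421" after the second listener
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {   # events newer than $notify_id; advances the cursor
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

That cursor arithmetic is why the trace shows notify_id stepping 0 -> 1 -> 2 as each namespace attach produces exactly one notification.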
00:24:18.896 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.897 [2024-11-19 09:26:19.789103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:18.897 [2024-11-19 09:26:19.790071] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:18.897 [2024-11-19 09:26:19.790091] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.897 [2024-11-19 09:26:19.876680] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:18.897 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:18.898 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:18.898 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:24:18.898 09:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1
00:24:19.156 [2024-11-19 09:26:19.982405] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:24:19.156 [2024-11-19 09:26:19.982439] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:19.156 [2024-11-19 09:26:19.982447] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:19.156 [2024-11-19 09:26:19.982452] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
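The step traced here adds a second listener on port 4421; the discovery controller reports it via an asynchronous event (the "got aer" line), a fresh discovery log page reveals the new path, and the test waits until both ports show up behind nvme0. As direct calls (4420 and 4421 are $NVMF_PORT and $NVMF_SECOND_PORT in the script):

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'

Note the first poll still sees only "4420"; the sleep-and-retry loop absorbs the delay until the 4421 controller finishes attaching.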
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- ))
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:20.095 09:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count ))
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:20.095 [2024-11-19 09:26:21.037578] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:20.095 [2024-11-19 09:26:21.037605] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:20.095 [2024-11-19 09:26:21.041123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:20.095 [2024-11-19 09:26:21.041145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:20.095 [2024-11-19 09:26:21.041155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:20.095 [2024-11-19 09:26:21.041162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:20.095 [2024-11-19 09:26:21.041170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:20.095 [2024-11-19 09:26:21.041178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:20.095 [2024-11-19 09:26:21.041186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:20.095 [2024-11-19 09:26:21.041193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:20.095 [2024-11-19 09:26:21.041200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fa490 is same with the state(6) to be set
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10
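Next the 4420 listener is removed again. The ASYNC EVENT REQUEST / SQ DELETION notices and the connect() failed, errno = 111 (ECONNREFUSED) reconnect attempts that dominate the rest of this excerpt are the expected fallout on the now-dead 4420 path, while the test asserts that the controller and both namespaces stay visible through 4421. The triggering call:

    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420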
max=10 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:20.095 [2024-11-19 09:26:21.051132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa490 (9): Bad file descriptor 00:24:20.095 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.095 [2024-11-19 09:26:21.061171] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.095 [2024-11-19 09:26:21.061184] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.095 [2024-11-19 09:26:21.061189] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.095 [2024-11-19 09:26:21.061195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.095 [2024-11-19 09:26:21.061219] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.095 [2024-11-19 09:26:21.061402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.095 [2024-11-19 09:26:21.061417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fa490 with addr=10.0.0.2, port=4420 00:24:20.095 [2024-11-19 09:26:21.061426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fa490 is same with the state(6) to be set 00:24:20.095 [2024-11-19 09:26:21.061439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa490 (9): Bad file descriptor 00:24:20.095 [2024-11-19 09:26:21.061451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.095 [2024-11-19 09:26:21.061458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.095 [2024-11-19 09:26:21.061467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.095 [2024-11-19 09:26:21.061474] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.095 [2024-11-19 09:26:21.061479] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:24:20.095 [2024-11-19 09:26:21.061483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.095 [2024-11-19 09:26:21.071249] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.095 [2024-11-19 09:26:21.071260] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.095 [2024-11-19 09:26:21.071265] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.095 [2024-11-19 09:26:21.071269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.095 [2024-11-19 09:26:21.071284] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.095 [2024-11-19 09:26:21.071395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.095 [2024-11-19 09:26:21.071408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fa490 with addr=10.0.0.2, port=4420 00:24:20.095 [2024-11-19 09:26:21.071416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fa490 is same with the state(6) to be set 00:24:20.095 [2024-11-19 09:26:21.071426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa490 (9): Bad file descriptor 00:24:20.095 [2024-11-19 09:26:21.071436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.095 [2024-11-19 09:26:21.071442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.095 [2024-11-19 09:26:21.071449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.095 [2024-11-19 09:26:21.071455] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.095 [2024-11-19 09:26:21.071460] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.095 [2024-11-19 09:26:21.071464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.095 [2024-11-19 09:26:21.081315] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.095 [2024-11-19 09:26:21.081329] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.095 [2024-11-19 09:26:21.081333] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.096 [2024-11-19 09:26:21.081341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.096 [2024-11-19 09:26:21.081357] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:20.096 [2024-11-19 09:26:21.081540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.096 [2024-11-19 09:26:21.081554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fa490 with addr=10.0.0.2, port=4420 00:24:20.096 [2024-11-19 09:26:21.081562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fa490 is same with the state(6) to be set 00:24:20.096 [2024-11-19 09:26:21.081573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa490 (9): Bad file descriptor 00:24:20.096 [2024-11-19 09:26:21.081583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.096 [2024-11-19 09:26:21.081590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.096 [2024-11-19 09:26:21.081597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.096 [2024-11-19 09:26:21.081603] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.096 [2024-11-19 09:26:21.081608] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.096 [2024-11-19 09:26:21.081612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:20.096 [2024-11-19 09:26:21.091389] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.096 [2024-11-19 09:26:21.091403] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.096 [2024-11-19 09:26:21.091407] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.096 [2024-11-19 09:26:21.091411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.096 [2024-11-19 09:26:21.091426] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:20.096 [2024-11-19 09:26:21.091528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.096 [2024-11-19 09:26:21.091542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fa490 with addr=10.0.0.2, port=4420 00:24:20.096 [2024-11-19 09:26:21.091549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fa490 is same with the state(6) to be set 00:24:20.096 [2024-11-19 09:26:21.091560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa490 (9): Bad file descriptor 00:24:20.096 [2024-11-19 09:26:21.091577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.096 [2024-11-19 09:26:21.091584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.096 [2024-11-19 09:26:21.091591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.096 [2024-11-19 09:26:21.091597] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.096 [2024-11-19 09:26:21.091602] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.096 [2024-11-19 09:26:21.091606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.096 [2024-11-19 09:26:21.101457] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.096 [2024-11-19 09:26:21.101472] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.096 [2024-11-19 09:26:21.101477] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:24:20.096 [2024-11-19 09:26:21.101480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.096 [2024-11-19 09:26:21.101495] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.096 [2024-11-19 09:26:21.101789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.096 [2024-11-19 09:26:21.101803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fa490 with addr=10.0.0.2, port=4420 00:24:20.096 [2024-11-19 09:26:21.101813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fa490 is same with the state(6) to be set 00:24:20.096 [2024-11-19 09:26:21.101825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa490 (9): Bad file descriptor 00:24:20.096 [2024-11-19 09:26:21.101836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.096 [2024-11-19 09:26:21.101843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.096 [2024-11-19 09:26:21.101851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.096 [2024-11-19 09:26:21.101857] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.096 [2024-11-19 09:26:21.101861] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.096 [2024-11-19 09:26:21.101865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.096 [2024-11-19 09:26:21.111526] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.096 [2024-11-19 09:26:21.111538] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.096 [2024-11-19 09:26:21.111542] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.096 [2024-11-19 09:26:21.111546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.096 [2024-11-19 09:26:21.111561] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
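The get_bdev_list function driving that condition is likewise visible in the trace (host/discovery.sh@55): it queries the host app's RPC socket for all bdevs and flattens the names into a single sorted, space-separated string. A sketch of the same pipeline; rpc_cmd is the test framework's wrapper around scripts/rpc.py:

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' \
            | sort \
            | xargs     # collapse to one line, e.g. "nvme0n1 nvme0n2"
    }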
00:24:20.096 [2024-11-19 09:26:21.111784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.096 [2024-11-19 09:26:21.111797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fa490 with addr=10.0.0.2, port=4420 00:24:20.096 [2024-11-19 09:26:21.111806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fa490 is same with the state(6) to be set 00:24:20.096 [2024-11-19 09:26:21.111816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa490 (9): Bad file descriptor 00:24:20.096 [2024-11-19 09:26:21.111834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.096 [2024-11-19 09:26:21.111842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.096 [2024-11-19 09:26:21.111849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.096 [2024-11-19 09:26:21.111855] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:20.096 [2024-11-19 09:26:21.111859] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.096 [2024-11-19 09:26:21.111863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.096 [2024-11-19 09:26:21.121592] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:20.096 [2024-11-19 09:26:21.121603] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:20.096 [2024-11-19 09:26:21.121607] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:20.096 [2024-11-19 09:26:21.121611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:20.096 [2024-11-19 09:26:21.121624] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:20.096 [2024-11-19 09:26:21.121784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:20.096 [2024-11-19 09:26:21.121796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fa490 with addr=10.0.0.2, port=4420 00:24:20.096 [2024-11-19 09:26:21.121803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fa490 is same with the state(6) to be set 00:24:20.096 [2024-11-19 09:26:21.121813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa490 (9): Bad file descriptor 00:24:20.096 [2024-11-19 09:26:21.121824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:20.096 [2024-11-19 09:26:21.121831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:20.096 [2024-11-19 09:26:21.121838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:20.096 [2024-11-19 09:26:21.121844] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
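Judging by the timestamps (21.081, 21.091, 21.101, 21.111, ...), the refused connect repeats on roughly a 10 ms cadence until the discovery poller drops the stale 4420 path. To gauge how much of that churn a captured log contains, one-liners like these work; build.log is a stand-in name for a saved copy of this console output:

    grep -c 'connect() failed, errno = 111' build.log   # refused connect attempts
    grep -c 'Start reconnecting ctrlr' build.log        # reconnect cycles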
00:24:20.096 [2024-11-19 09:26:21.121848] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:20.096 [2024-11-19 09:26:21.121852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:20.096 [2024-11-19 09:26:21.123342] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:20.096 [2024-11-19 09:26:21.123359] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:20.096 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:20.097 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:20.356 09:26:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:24:20.356 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.357 09:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.735 [2024-11-19 09:26:22.436420] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:21.735 [2024-11-19 09:26:22.436438] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:21.735 [2024-11-19 09:26:22.436449] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.735 [2024-11-19 09:26:22.522708] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:21.994 [2024-11-19 09:26:22.826012] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:21.994 [2024-11-19 09:26:22.826654] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x210b9d0:1 started. 
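Here the test restarts discovery with wait_for_attach, and the log shows the happy path: the discovery ctrlr attaches on 10.0.0.2:8009, the log page reports cnode0 at 4421 as a new subsystem, and an NVMe ctrlr plus qpair are created. Stripped of the rpc_cmd wrapper, the call is the plain SPDK RPC below, with the socket path and arguments copied from the trace:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        -w      # block until the discovered subsystems are attached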
00:24:21.994 [2024-11-19 09:26:22.828296] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:21.994 [2024-11-19 09:26:22.828320] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.994 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.994 [2024-11-19 09:26:22.835581] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x210b9d0 was disconnected and freed. delete nvme_qpair. 
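Next the test issues the same RPC again under the NOT wrapper, expecting the duplicate registration to be rejected; the -17 "File exists" response that follows confirms it. The wrapper's logic can be pieced together from the @650-@677 trace: run the command, capture a nonzero status, and invert it. A simplified sketch, with the valid_exec_arg check abbreviated away:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"  # deaths by signal still count as real failures
        (( es != 0 ))                   # succeed only when the wrapped command failed
    }
    # NOT rpc_cmd ... bdev_nvme_start_discovery ...   # passes because the RPC errors out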
00:24:21.994 request: 00:24:21.994 { 00:24:21.994 "name": "nvme", 00:24:21.994 "trtype": "tcp", 00:24:21.994 "traddr": "10.0.0.2", 00:24:21.995 "adrfam": "ipv4", 00:24:21.995 "trsvcid": "8009", 00:24:21.995 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:21.995 "wait_for_attach": true, 00:24:21.995 "method": "bdev_nvme_start_discovery", 00:24:21.995 "req_id": 1 00:24:21.995 } 00:24:21.995 Got JSON-RPC error response 00:24:21.995 response: 00:24:21.995 { 00:24:21.995 "code": -17, 00:24:21.995 "message": "File exists" 00:24:21.995 } 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.995 request: 00:24:21.995 { 00:24:21.995 "name": "nvme_second", 00:24:21.995 "trtype": "tcp", 00:24:21.995 "traddr": "10.0.0.2", 00:24:21.995 "adrfam": "ipv4", 00:24:21.995 "trsvcid": "8009", 00:24:21.995 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:21.995 "wait_for_attach": true, 00:24:21.995 "method": "bdev_nvme_start_discovery", 00:24:21.995 "req_id": 1 00:24:21.995 } 00:24:21.995 Got JSON-RPC error response 00:24:21.995 response: 00:24:21.995 { 00:24:21.995 "code": -17, 00:24:21.995 "message": "File exists" 00:24:21.995 } 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:21.995 09:26:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.995 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:21.995 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:21.995 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.995 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:21.995 09:26:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:21.995 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.995 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:21.995 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.995 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.254 09:26:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:23.242 [2024-11-19 09:26:24.072010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:23.242 [2024-11-19 09:26:24.072038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212ae90 with addr=10.0.0.2, port=8010 00:24:23.242 [2024-11-19 09:26:24.072052] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:23.242 [2024-11-19 09:26:24.072058] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:23.242 [2024-11-19 09:26:24.072065] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:24.177 [2024-11-19 09:26:25.074399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.177 [2024-11-19 09:26:25.074425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212ae90 with addr=10.0.0.2, port=8010 00:24:24.177 [2024-11-19 09:26:25.074437] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:24.177 [2024-11-19 09:26:25.074444] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:24.177 [2024-11-19 09:26:25.074451] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:25.211 [2024-11-19 09:26:26.076640] 
bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:25.211 request: 00:24:25.211 { 00:24:25.211 "name": "nvme_second", 00:24:25.211 "trtype": "tcp", 00:24:25.211 "traddr": "10.0.0.2", 00:24:25.211 "adrfam": "ipv4", 00:24:25.211 "trsvcid": "8010", 00:24:25.211 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:25.211 "wait_for_attach": false, 00:24:25.211 "attach_timeout_ms": 3000, 00:24:25.211 "method": "bdev_nvme_start_discovery", 00:24:25.211 "req_id": 1 00:24:25.211 } 00:24:25.211 Got JSON-RPC error response 00:24:25.211 response: 00:24:25.211 { 00:24:25.211 "code": -110, 00:24:25.211 "message": "Connection timed out" 00:24:25.211 } 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1217385 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.211 rmmod nvme_tcp 00:24:25.211 rmmod nvme_fabrics 00:24:25.211 rmmod nvme_keyring 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:25.211 09:26:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1217365 ']' 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1217365 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 1217365 ']' 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 1217365 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:25.211 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1217365 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1217365' 00:24:25.525 killing process with pid 1217365 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 1217365 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 1217365 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.525 09:26:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.434 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.434 00:24:27.434 real 0m17.415s 00:24:27.434 user 0m20.869s 00:24:27.434 sys 0m5.909s 00:24:27.434 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:27.434 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.434 ************************************ 00:24:27.434 END TEST nvmf_host_discovery 00:24:27.434 ************************************ 00:24:27.694 09:26:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:27.694 09:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:27.694 09:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:27.694 09:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.694 ************************************ 00:24:27.694 START TEST nvmf_host_multipath_status 00:24:27.694 ************************************ 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:27.695 * Looking for test storage... 00:24:27.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.695 --rc genhtml_branch_coverage=1 00:24:27.695 --rc genhtml_function_coverage=1 00:24:27.695 --rc genhtml_legend=1 00:24:27.695 --rc geninfo_all_blocks=1 00:24:27.695 --rc geninfo_unexecuted_blocks=1 00:24:27.695 00:24:27.695 ' 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.695 --rc genhtml_branch_coverage=1 00:24:27.695 --rc genhtml_function_coverage=1 00:24:27.695 --rc genhtml_legend=1 00:24:27.695 --rc geninfo_all_blocks=1 00:24:27.695 --rc geninfo_unexecuted_blocks=1 00:24:27.695 00:24:27.695 ' 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.695 --rc genhtml_branch_coverage=1 00:24:27.695 --rc genhtml_function_coverage=1 00:24:27.695 --rc genhtml_legend=1 00:24:27.695 --rc geninfo_all_blocks=1 00:24:27.695 --rc geninfo_unexecuted_blocks=1 00:24:27.695 00:24:27.695 ' 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:27.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.695 --rc genhtml_branch_coverage=1 00:24:27.695 --rc genhtml_function_coverage=1 00:24:27.695 --rc genhtml_legend=1 00:24:27.695 --rc geninfo_all_blocks=1 00:24:27.695 --rc geninfo_unexecuted_blocks=1 00:24:27.695 00:24:27.695 ' 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
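Before the multipath test proper begins, the harness probes the installed lcov version with the lt/cmp_versions helpers traced above (scripts/common.sh@333-@368) to decide which coverage flags to export. A compact reconstruction, simplified to the '<', '>' and equality cases exercised here and assuming purely numeric version fields:

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]   # versions compare equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    # lt 1.15 2  -> true, so the newer --rc lcov_* option set is used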
00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.695 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:27.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.696 09:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.264 09:26:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:34.264 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
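gather_supported_nvmf_pci_devs builds per-family device-ID lists (e810 = Intel 0x1592/0x159b, x722 = 0x37d2, plus the Mellanox mlx IDs) and walks the PCI bus with them; here it matches the two E810 ports at 0000:86:00.0 and 0000:86:00.1. The same query can be run by hand with pciutils, for example:

    lspci -nn -d 8086:159b   # vendor 0x8086 / device 0x159b; expect 86:00.0 and 86:00.1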
00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:34.264 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:34.264 Found net devices under 0000:86:00.0: cvl_0_0 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:24:34.264 Found net devices under 0000:86:00.1: cvl_0_1 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.264 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.265 09:26:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:24:34.265 00:24:34.265 --- 10.0.0.2 ping statistics --- 00:24:34.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.265 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:24:34.265 00:24:34.265 --- 10.0.0.1 ping statistics --- 00:24:34.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.265 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1222475 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1222475 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1222475 ']' 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:34.265 09:26:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:34.265 09:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.265 [2024-11-19 09:26:34.720210] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:24:34.265 [2024-11-19 09:26:34.720254] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.265 [2024-11-19 09:26:34.799842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:34.265 [2024-11-19 09:26:34.843335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.265 [2024-11-19 09:26:34.843370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.265 [2024-11-19 09:26:34.843379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.265 [2024-11-19 09:26:34.843385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.265 [2024-11-19 09:26:34.843390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.265 [2024-11-19 09:26:34.844585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.265 [2024-11-19 09:26:34.844588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.525 09:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:34.525 09:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:24:34.525 09:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.525 09:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:34.525 09:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:34.784 09:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.784 09:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1222475 00:24:34.784 09:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:34.784 [2024-11-19 09:26:35.779769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.784 09:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:35.043 Malloc0 00:24:35.043 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:24:35.303 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:35.562 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.562 [2024-11-19 09:26:36.601415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:35.821 [2024-11-19 09:26:36.801963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1222857 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1222857 /var/tmp/bdevperf.sock 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1222857 ']' 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
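Setup to this point condenses to a short RPC sequence against the target (rpc.py abbreviates the full scripts/rpc.py path shown in the trace; every flag is as executed above):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -r -m 2            # -r enables ANA reporting
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

ANA reporting plus the two listeners on ports 4420 and 4421 are what give the initiator two distinguishable paths to the same namespace. bdevperf, started above with -q 128 -o 4096 -w verify -t 90 on its own RPC socket, attaches to both ports in the records that follow, and the -x multipath flag on each bdev_nvme_attach_controller call merges them into the single Nvme0n1 bdev under test.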
00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:35.821 09:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:36.080 09:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:36.080 09:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:24:36.080 09:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:36.339 09:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:36.598 Nvme0n1 00:24:36.598 09:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:37.167 Nvme0n1 00:24:37.167 09:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:37.167 09:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:39.072 09:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:39.072 09:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:39.331 09:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:39.589 09:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:40.526 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:40.526 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:40.526 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.526 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:40.786 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.786 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:40.786 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.786 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.045 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.045 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:41.045 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:41.045 09:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.304 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.304 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:41.304 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.304 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:41.304 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.304 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:41.304 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.304 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:41.563 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.563 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:41.563 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.563 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:41.822 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.822 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:41.822 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
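Each set_ANA_state invocation expands to one nvmf_subsystem_listener_set_ana_state RPC per listener; the port 4420 call is the record just above, and the port 4421 companion follows immediately below. Written out for this non_optimized/optimized step:

  # set_ANA_state <state for 4420> <state for 4421>, per the expansions in the trace:
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4421 -n optimized

The states cycled through this run are optimized, non_optimized, and inaccessible; the sleep 1 after each pair gives the initiator time to observe the ANA change before the paths are checked.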
00:24:42.081 09:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:42.340 09:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:43.277 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:43.277 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:43.277 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.277 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:43.537 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:43.537 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:43.537 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.537 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:43.796 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.796 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:43.796 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.796 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:43.796 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:43.796 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:43.796 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:43.796 09:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:44.055 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.055 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:44.055 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
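Reading the interleaved expansions above and below, the six booleans handed to check_status map, in order, to the current, connected, and accessible fields for ports 4420 and 4421. A sketch of the helper under that reading (a reconstruction, not the verbatim multipath_status.sh source):

  # check_status <4420 current> <4421 current> <4420 connected> \
  #              <4421 connected> <4420 accessible> <4421 accessible>
  check_status() {
    port_status 4420 current    "$1"; port_status 4421 current    "$2"
    port_status 4420 connected  "$3"; port_status 4421 connected  "$4"
    port_status 4420 accessible "$5"; port_status 4421 accessible "$6"
  }

So the check_status false true true true true true above asserts that only the optimized 4421 path is current for I/O while both paths stay connected and accessible.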
00:24:44.055 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:44.314 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.314 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:44.314 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.314 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:44.573 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.573 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:44.573 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:44.833 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:44.833 09:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:46.211 09:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:46.211 09:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:46.211 09:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.211 09:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:46.211 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.211 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:46.211 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.211 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:46.471 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:46.471 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:46.471 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.471 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:46.471 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.471 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:46.471 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:46.471 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.731 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.731 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:46.731 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.731 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:46.990 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:46.990 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:46.990 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:46.990 09:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.249 09:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.249 09:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:47.249 09:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:47.508 09:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:47.767 09:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:48.703 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:48.703 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:48.703 09:26:49 
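Each port_status probe is the two-command pattern repeated throughout: ask bdevperf's RPC socket for Nvme0n1's I/O paths, select the path whose trsvcid matches the port, pull one field with jq, and compare it to the expected value. Condensed (the [[ ... == \t\r\u\e ]] records are that final comparison, with the right-hand pattern escaped character by character by xtrace):

  # port_status <port> <field> <expected>, condensed from the trace:
  got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current')
  [[ $got == true ]]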
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.703 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:48.963 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:48.963 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:48.963 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.963 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:48.963 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:48.963 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:48.963 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:48.963 09:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.222 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.222 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.222 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.222 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:49.480 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.481 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:49.481 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.481 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:49.740 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.740 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:49.740 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.740 09:26:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:49.999 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.999 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:49.999 09:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:50.258 09:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:50.258 09:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:51.637 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:51.637 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:51.637 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.637 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:51.637 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.637 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:51.637 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.637 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:51.897 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:51.897 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:51.897 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.897 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:51.897 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:51.897 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:51.897 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:51.897 09:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.156 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.156 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:52.156 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.156 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.415 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.415 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:52.415 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:52.415 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.674 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.674 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:52.674 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:52.674 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:52.938 09:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:53.874 09:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:53.874 09:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:53.874 09:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:53.874 09:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.133 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.133 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:54.133 09:26:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.133 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:54.393 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.393 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:54.393 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.393 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:54.652 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.652 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:54.652 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.652 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:54.911 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.911 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:54.911 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.911 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:54.911 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:54.911 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:54.911 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.911 09:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.170 09:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.170 09:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:55.430 09:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:55.430 09:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:55.689 09:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:55.948 09:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:56.886 09:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:56.886 09:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:56.886 09:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:56.886 09:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.145 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.145 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:57.145 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.145 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:57.405 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.405 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:57.405 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.405 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.664 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.664 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.664 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.664 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:57.664 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.664 09:26:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:57.664 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.664 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:57.923 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.923 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:57.923 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.923 09:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.182 09:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.182 09:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:58.182 09:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:58.440 09:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:58.699 09:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:59.637 09:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:59.637 09:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:59.637 09:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.637 09:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:59.895 09:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:59.895 09:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:59.895 09:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.895 09:27:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:00.153 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.153 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:00.153 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.153 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:00.419 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.419 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:00.419 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.419 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:00.419 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.419 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:00.419 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.419 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.679 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.679 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:00.679 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.679 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.937 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.937 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:00.937 09:27:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:01.196 09:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:01.196 09:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
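From the bdev_nvme_set_multipath_policy call a few records back, the same ANA cycle is replayed under the active_active policy, and the expected current flags change: the earlier checks showed only one path current at a time, whereas active_active drives I/O across every usable path, so even the non_optimized/non_optimized step just configured should leave both paths current. The check that follows expects all six flags true:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  # after which the non_optimized/non_optimized step below asserts:
  # check_status true true true true true true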
00:25:02.575 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:02.575 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:02.575 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.575 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.575 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.575 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:02.575 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.575 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.833 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.834 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.834 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.834 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:03.093 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.093 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:03.093 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.093 09:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:03.093 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.093 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:03.093 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.093 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.352 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.352 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.352 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.352 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.611 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.611 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:03.612 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:03.871 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:04.129 09:27:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:05.067 09:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:05.067 09:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.067 09:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.067 09:27:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.326 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.326 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:05.326 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.326 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.585 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.585 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.585 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.585 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.585 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:05.586 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.586 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.586 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:05.845 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.845 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:05.845 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.845 09:27:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.104 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.104 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:06.104 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.104 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1222857 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1222857 ']' 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1222857 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1222857 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1222857' 00:25:06.363 killing process with pid 1222857 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1222857 00:25:06.363 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1222857 00:25:06.363 { 00:25:06.363 "results": [ 00:25:06.363 { 00:25:06.363 "job": "Nvme0n1", 
00:25:06.363 "core_mask": "0x4", 00:25:06.363 "workload": "verify", 00:25:06.363 "status": "terminated", 00:25:06.363 "verify_range": { 00:25:06.363 "start": 0, 00:25:06.363 "length": 16384 00:25:06.363 }, 00:25:06.363 "queue_depth": 128, 00:25:06.363 "io_size": 4096, 00:25:06.363 "runtime": 29.077578, 00:25:06.363 "iops": 10380.885230537426, 00:25:06.363 "mibps": 40.55033293178682, 00:25:06.363 "io_failed": 0, 00:25:06.363 "io_timeout": 0, 00:25:06.363 "avg_latency_us": 12310.605912269126, 00:25:06.363 "min_latency_us": 869.064347826087, 00:25:06.363 "max_latency_us": 3019898.88 00:25:06.363 } 00:25:06.363 ], 00:25:06.363 "core_count": 1 00:25:06.363 } 00:25:06.625 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1222857 00:25:06.625 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.625 [2024-11-19 09:26:36.880105] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:25:06.625 [2024-11-19 09:26:36.880160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222857 ] 00:25:06.625 [2024-11-19 09:26:36.958155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.625 [2024-11-19 09:26:37.001301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.625 Running I/O for 90 seconds... 00:25:06.625 11258.00 IOPS, 43.98 MiB/s [2024-11-19T08:27:07.684Z] 11200.50 IOPS, 43.75 MiB/s [2024-11-19T08:27:07.684Z] 11205.33 IOPS, 43.77 MiB/s [2024-11-19T08:27:07.684Z] 11201.75 IOPS, 43.76 MiB/s [2024-11-19T08:27:07.684Z] 11209.80 IOPS, 43.79 MiB/s [2024-11-19T08:27:07.684Z] 11210.17 IOPS, 43.79 MiB/s [2024-11-19T08:27:07.684Z] 11204.00 IOPS, 43.77 MiB/s [2024-11-19T08:27:07.684Z] 11175.25 IOPS, 43.65 MiB/s [2024-11-19T08:27:07.684Z] 11180.11 IOPS, 43.67 MiB/s [2024-11-19T08:27:07.684Z] 11182.50 IOPS, 43.68 MiB/s [2024-11-19T08:27:07.684Z] 11195.64 IOPS, 43.73 MiB/s [2024-11-19T08:27:07.684Z] 11192.00 IOPS, 43.72 MiB/s [2024-11-19T08:27:07.684Z] [2024-11-19 09:26:51.041683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.625 [2024-11-19 09:26:51.041722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:06.625 [2024-11-19 09:26:51.041758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.625 [2024-11-19 09:26:51.041768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:06.625 [2024-11-19 09:26:51.041781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.625 [2024-11-19 09:26:51.041789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:06.625 [2024-11-19 09:26:51.041802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:06.625 [2024-11-19 09:26:51.041808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:06.625 [2024-11-19 09:26:51.041820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.625 [2024-11-19 09:26:51.041827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:06.625 [2024-11-19 09:26:51.041839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.041846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.041858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.041865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.041878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.041884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.041896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.041903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.041915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.041928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.041940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.041952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.041965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.041972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042850] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.042981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.042989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.043002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.043024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.043046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:25:06.626 [2024-11-19 09:26:51.043066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.043090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.043110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.043131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.043151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.043172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:06.626 [2024-11-19 09:26:51.043193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.626 [2024-11-19 09:26:51.043200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043469] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.627 [2024-11-19 09:26:51.043510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.627 [2024-11-19 09:26:51.043531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.627 [2024-11-19 09:26:51.043554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.627 [2024-11-19 09:26:51.043664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104592 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.043986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.043993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.044008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:97 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.627 [2024-11-19 09:26:51.044015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:06.627 [2024-11-19 09:26:51.044031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 
09:26:51.044235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:104776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:104800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 
cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:104840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044771] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.628 [2024-11-19 09:26:51.044844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:06.628 [2024-11-19 09:26:51.044861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.044868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.044885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.044892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.044912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.044919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.044936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.044943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.044966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.044973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.044990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.044997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 
09:26:51.045021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.045047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.045122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.045148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.045174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.045200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.045225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.045251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.045282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.629 [2024-11-19 09:26:51.045308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:06.629 [2024-11-19 09:26:51.045326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105064 len:8 
00:25:06.629 [2024-11-19 09:26:51 -> 09:27:04] nvme_qpair.c NOTICE stream, condensed: several hundred interleaved 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs on qid:1 - WRITE (and occasional READ) commands, nsid:1, lba:105072-108376, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000 - each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, the expected errors while the test holds the active path in the ANA Inaccessible state.
00:25:06.630 fio IOPS ticker over the same window, condensed: 11083.38 IOPS (43.29 MiB/s) falling to a low of 9094.25 IOPS (35.52 MiB/s), then recovering to 10320.33 IOPS (40.31 MiB/s).
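The "(03/02)" pair in those completions is the NVMe status a target returns while a path sits in the ANA Inaccessible state: Status Code Type 0x3 (Path Related Status) and, under that type, Status Code 0x02 (Asymmetric Access Inaccessible). A minimal decode sketch - the packed status value below is purely illustrative, not the wire format SPDK parses:

    status=0x0302    # illustrative packing for the demo: (SCT << 8) | SC
    printf 'SCT=0x%02x SC=0x%02x\n' $(( (status >> 8) & 0xff )) $(( status & 0xff ))
    # SCT=0x03 SC=0x02 -> Path Related Status / Asymmetric Access Inaccessible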
[2024-11-19T08:27:07.690Z] 10351.61 IOPS, 40.44 MiB/s [2024-11-19T08:27:07.690Z] 10381.59 IOPS, 40.55 MiB/s [2024-11-19T08:27:07.690Z] Received shutdown signal, test time was about 29.078248 seconds 00:25:06.631 00:25:06.632 Latency(us) 00:25:06.632 [2024-11-19T08:27:07.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.632 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:06.632 Verification LBA range: start 0x0 length 0x4000 00:25:06.632 Nvme0n1 : 29.08 10380.89 40.55 0.00 0.00 12310.61 869.06 3019898.88 00:25:06.632 [2024-11-19T08:27:07.691Z] =================================================================================================================== 00:25:06.632 [2024-11-19T08:27:07.691Z] Total : 10380.89 40.55 0.00 0.00 12310.61 869.06 3019898.88 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.632 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.632 rmmod nvme_tcp 00:25:06.632 rmmod nvme_fabrics 00:25:06.891 rmmod nvme_keyring 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1222475 ']' 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1222475 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1222475 ']' 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1222475 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1222475 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1222475' 00:25:06.891 killing process with pid 1222475 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1222475 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1222475 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.891 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.153 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.153 09:27:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:09.314 00:25:09.314 real 0m41.463s 00:25:09.314 user 1m52.422s 00:25:09.314 sys 0m11.597s 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:09.314 ************************************ 00:25:09.314 END TEST nvmf_host_multipath_status 00:25:09.314 ************************************ 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.314 ************************************ 00:25:09.314 START TEST nvmf_discovery_remove_ifc 00:25:09.314 ************************************ 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:09.314 * Looking for test storage... 
00:25:09.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.314 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:09.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.315 --rc genhtml_branch_coverage=1 00:25:09.315 --rc genhtml_function_coverage=1 00:25:09.315 --rc genhtml_legend=1 00:25:09.315 --rc geninfo_all_blocks=1 00:25:09.315 --rc geninfo_unexecuted_blocks=1 00:25:09.315 00:25:09.315 ' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:09.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.315 --rc genhtml_branch_coverage=1 00:25:09.315 --rc genhtml_function_coverage=1 00:25:09.315 --rc genhtml_legend=1 00:25:09.315 --rc geninfo_all_blocks=1 00:25:09.315 --rc geninfo_unexecuted_blocks=1 00:25:09.315 00:25:09.315 ' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:09.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.315 --rc genhtml_branch_coverage=1 00:25:09.315 --rc genhtml_function_coverage=1 00:25:09.315 --rc genhtml_legend=1 00:25:09.315 --rc geninfo_all_blocks=1 00:25:09.315 --rc geninfo_unexecuted_blocks=1 00:25:09.315 00:25:09.315 ' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:09.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.315 --rc genhtml_branch_coverage=1 00:25:09.315 --rc genhtml_function_coverage=1 00:25:09.315 --rc genhtml_legend=1 00:25:09.315 --rc geninfo_all_blocks=1 00:25:09.315 --rc geninfo_unexecuted_blocks=1 00:25:09.315 00:25:09.315 ' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.315 
09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.315 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:09.316 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:09.316 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:09.316 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.316 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.316 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.316 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:09.316 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:09.316 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.316 09:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:15.891 09:27:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:15.891 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.891 09:27:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:15.891 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:15.891 Found net devices under 0000:86:00.0: cvl_0_0 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:15.891 Found net devices under 0000:86:00.1: cvl_0_1 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.891 09:27:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.891 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.891 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.891 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.891 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.891 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.891 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.891 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.891 
09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:25:15.891 00:25:15.891 --- 10.0.0.2 ping statistics --- 00:25:15.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.892 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:25:15.892 00:25:15.892 --- 10.0.0.1 ping statistics --- 00:25:15.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.892 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1232029 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1232029 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1232029 ']' 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:15.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.892 [2024-11-19 09:27:16.262957] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:25:15.892 [2024-11-19 09:27:16.263001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.892 [2024-11-19 09:27:16.342030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.892 [2024-11-19 09:27:16.383177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.892 [2024-11-19 09:27:16.383210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.892 [2024-11-19 09:27:16.383218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.892 [2024-11-19 09:27:16.383224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.892 [2024-11-19 09:27:16.383229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.892 [2024-11-19 09:27:16.383790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.892 [2024-11-19 09:27:16.523334] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.892 [2024-11-19 09:27:16.531495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:15.892 null0 00:25:15.892 [2024-11-19 09:27:16.563486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1232055 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1232055 /tmp/host.sock 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1232055 ']' 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:15.892 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.892 [2024-11-19 09:27:16.632998] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:25:15.892 [2024-11-19 09:27:16.633038] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232055 ] 00:25:15.892 [2024-11-19 09:27:16.706250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.892 [2024-11-19 09:27:16.747649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.892 09:27:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.271 [2024-11-19 09:27:17.889523] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:17.271 [2024-11-19 09:27:17.889542] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:17.271 [2024-11-19 09:27:17.889558] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:17.271 [2024-11-19 09:27:17.975829] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:17.271 [2024-11-19 09:27:18.030359] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:17.271 [2024-11-19 09:27:18.031140] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10d2af0:1 started. 00:25:17.271 [2024-11-19 09:27:18.032480] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:17.271 [2024-11-19 09:27:18.032519] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:17.271 [2024-11-19 09:27:18.032536] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:17.271 [2024-11-19 09:27:18.032549] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:17.271 [2024-11-19 09:27:18.032567] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.271 [2024-11-19 09:27:18.079567] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10d2af0 was disconnected and freed. delete nvme_qpair. 
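[editor's note] The xtrace above compresses the interesting part of the host-side setup into a few entries; spelled out, the sequence is roughly the following (rpc.py stands in for the rpc_cmd wrapper; all flags are verbatim from the trace):

    # Attach a discovery controller to the target's discovery service on
    # 10.0.0.2:8009 and wait until the discovered subsystem is attached.
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

    # The bdev list should now contain nvme0n1.
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs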
00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:17.271 09:27:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:18.209 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:18.209 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.209 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:18.209 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.209 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:18.209 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:18.209 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:18.209 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.468 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:18.468 09:27:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:19.405 09:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:19.405 09:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.405 09:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:19.405 09:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.405 09:27:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:19.405 09:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:19.405 09:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:19.405 09:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.405 09:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:19.405 09:27:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:20.342 09:27:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:21.720 09:27:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.656 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:22.657 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.657 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:22.657 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.657 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:22.657 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:22.657 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:22.657 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.657 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:22.657 09:27:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:22.657 [2024-11-19 09:27:23.484121] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:22.657 [2024-11-19 09:27:23.484175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.657 [2024-11-19 09:27:23.484202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.657 [2024-11-19 09:27:23.484222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.657 [2024-11-19 09:27:23.484230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.657 [2024-11-19 09:27:23.484237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.657 [2024-11-19 09:27:23.484243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.657 [2024-11-19 09:27:23.484250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.657 [2024-11-19 09:27:23.484256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.657 [2024-11-19 09:27:23.484264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.657 [2024-11-19 09:27:23.484270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.657 [2024-11-19 09:27:23.484277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10af320 is same with the state(6) to be set 00:25:22.657 [2024-11-19 09:27:23.494141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10af320 (9): Bad file descriptor 00:25:22.657 [2024-11-19 09:27:23.504181] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:22.657 [2024-11-19 09:27:23.504192] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:22.657 [2024-11-19 09:27:23.504196] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
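[editor's note] The repeated get_bdev_list / sleep 1 entries above are a polling loop: after the target-side address is deleted and the link taken down, the test waits for the keep-alive timeout to fire and the bdev to disappear. A rough reconstruction of the two helpers as the trace implies them (the bodies are inferred from the xtrace, not copied from discovery_remove_ifc.sh):

    get_bdev_list() {
        # One name per line -> sorted, space-joined; empty when no bdevs exist.
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the list matches the expected value
        # ('' = wait for teardown, nvme0n1 = wait for attach).
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }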
00:25:22.657 [2024-11-19 09:27:23.504201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:22.657 [2024-11-19 09:27:23.504222] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:23.595 [2024-11-19 09:27:24.527980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:23.595 [2024-11-19 09:27:24.528059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10af320 with addr=10.0.0.2, port=4420 00:25:23.595 [2024-11-19 09:27:24.528091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10af320 is same with the state(6) to be set 00:25:23.595 [2024-11-19 09:27:24.528141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10af320 (9): Bad file descriptor 00:25:23.595 [2024-11-19 09:27:24.529096] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:25:23.595 [2024-11-19 09:27:24.529160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:23.595 [2024-11-19 09:27:24.529184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:23.595 [2024-11-19 09:27:24.529208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:23.595 [2024-11-19 09:27:24.529227] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:23.595 [2024-11-19 09:27:24.529243] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:23.595 [2024-11-19 09:27:24.529257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:23.595 [2024-11-19 09:27:24.529278] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
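[editor's note] The errno 110 (connection timed out) entries and the disconnect/reconnect churn above are the expected effect of the timeouts passed to bdev_nvme_start_discovery earlier: reconnect attempts fire every --reconnect-delay-sec 1, I/O fails fast after --fast-io-fail-timeout-sec 1, and the controller is abandoned after --ctrlr-loss-timeout-sec 2. One hedged way to watch the state machine from outside while this runs:

    # Poll controller state once per second while the reconnect timers run
    # (sketch; same host app socket as above).
    watch -n 1 "rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq ."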
00:25:23.595 [2024-11-19 09:27:24.529292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:23.595 09:27:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:24.533 [2024-11-19 09:27:25.531811] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:24.533 [2024-11-19 09:27:25.531830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:24.533 [2024-11-19 09:27:25.531841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:24.533 [2024-11-19 09:27:25.531847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:24.533 [2024-11-19 09:27:25.531854] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:25:24.533 [2024-11-19 09:27:25.531860] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:24.533 [2024-11-19 09:27:25.531865] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:24.533 [2024-11-19 09:27:25.531869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:24.533 [2024-11-19 09:27:25.531893] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:24.533 [2024-11-19 09:27:25.531911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.533 [2024-11-19 09:27:25.531920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.533 [2024-11-19 09:27:25.531929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.533 [2024-11-19 09:27:25.531936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.533 [2024-11-19 09:27:25.531942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.533 [2024-11-19 09:27:25.531953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.533 [2024-11-19 09:27:25.531960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.533 [2024-11-19 09:27:25.531966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.533 [2024-11-19 09:27:25.531974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.533 [2024-11-19 09:27:25.531980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.533 [2024-11-19 09:27:25.531988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:25:24.533 [2024-11-19 09:27:25.532428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109ea00 (9): Bad file descriptor 00:25:24.533 [2024-11-19 09:27:25.533440] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:24.533 [2024-11-19 09:27:25.533451] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:25:24.533 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.533 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.533 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.533 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.533 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.533 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.533 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.533 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:24.793 09:27:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:25.731 09:27:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.731 09:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.731 09:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.731 09:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.731 09:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.731 09:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.731 09:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.731 09:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.731 09:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:25.731 09:27:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.667 [2024-11-19 09:27:27.585441] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:26.667 [2024-11-19 09:27:27.585458] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:26.667 [2024-11-19 09:27:27.585471] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:26.667 [2024-11-19 09:27:27.712874] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:26.927 09:27:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.927 [2024-11-19 09:27:27.936936] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:26.927 [2024-11-19 09:27:27.937594] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x10aa0d0:1 started. 
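[editor's note] Restoring the path is the mirror image of the removal: the address goes back on the namespaced port, the link comes up, and the still-running discovery service re-attaches the subsystem, this time as nvme1. Condensed from the trace (wait_for_bdev as sketched earlier):

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Discovery notices the listener again and creates a fresh controller,
    # so the namespace surfaces under a new bdev name.
    wait_for_bdev nvme1n1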
00:25:26.927 [2024-11-19 09:27:27.938668] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:26.927 [2024-11-19 09:27:27.938700] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:26.927 [2024-11-19 09:27:27.938720] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:26.927 [2024-11-19 09:27:27.938736] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:26.927 [2024-11-19 09:27:27.938743] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:26.928 [2024-11-19 09:27:27.943959] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x10aa0d0 was disconnected and freed. delete nvme_qpair. 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1232055 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1232055 ']' 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1232055 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:27.864 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1232055 00:25:28.124 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:28.124 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:28.124 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1232055' 00:25:28.124 killing process with pid 1232055 00:25:28.124 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1232055 00:25:28.124 09:27:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1232055 00:25:28.124 09:27:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:28.124 rmmod nvme_tcp 00:25:28.124 rmmod nvme_fabrics 00:25:28.124 rmmod nvme_keyring 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1232029 ']' 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1232029 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1232029 ']' 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1232029 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:28.124 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1232029 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1232029' 00:25:28.383 killing process with pid 1232029 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1232029 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1232029 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.383 09:27:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:30.920 00:25:30.920 real 0m21.364s 00:25:30.920 user 0m26.578s 00:25:30.920 sys 0m5.842s 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.920 ************************************ 00:25:30.920 END TEST nvmf_discovery_remove_ifc 00:25:30.920 ************************************ 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.920 ************************************ 00:25:30.920 START TEST nvmf_identify_kernel_target 00:25:30.920 ************************************ 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:30.920 * Looking for test storage... 
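[editor's note] Worth noting before the next test's output takes over: nvmftestfini (traced a few entries above) can strip exactly its own firewall changes because every rule was installed with an SPDK_NVMF comment tag. The pattern, sketched (the ip netns delete line is an assumption about what remove_spdk_ns does):

    # Rules are installed tagged with a recognizable comment...
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # ...so teardown can drop them all by filtering the saved ruleset,
    # then remove the target namespace and flush the initiator port.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1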
00:25:30.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.920 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.921 --rc genhtml_branch_coverage=1 00:25:30.921 --rc genhtml_function_coverage=1 00:25:30.921 --rc genhtml_legend=1 00:25:30.921 --rc geninfo_all_blocks=1 00:25:30.921 --rc geninfo_unexecuted_blocks=1 00:25:30.921 00:25:30.921 ' 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.921 --rc genhtml_branch_coverage=1 00:25:30.921 --rc genhtml_function_coverage=1 00:25:30.921 --rc genhtml_legend=1 00:25:30.921 --rc geninfo_all_blocks=1 00:25:30.921 --rc geninfo_unexecuted_blocks=1 00:25:30.921 00:25:30.921 ' 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.921 --rc genhtml_branch_coverage=1 00:25:30.921 --rc genhtml_function_coverage=1 00:25:30.921 --rc genhtml_legend=1 00:25:30.921 --rc geninfo_all_blocks=1 00:25:30.921 --rc geninfo_unexecuted_blocks=1 00:25:30.921 00:25:30.921 ' 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:30.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.921 --rc genhtml_branch_coverage=1 00:25:30.921 --rc genhtml_function_coverage=1 00:25:30.921 --rc genhtml_legend=1 00:25:30.921 --rc geninfo_all_blocks=1 00:25:30.921 --rc geninfo_unexecuted_blocks=1 00:25:30.921 00:25:30.921 ' 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.921 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:30.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.922 09:27:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.491 09:27:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:37.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:37.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:37.491 Found net devices under 0000:86:00.0: cvl_0_0 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:37.491 Found net devices under 0000:86:00.1: cvl_0_1 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.491 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:25:37.492 00:25:37.492 --- 10.0.0.2 ping statistics --- 00:25:37.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.492 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:25:37.492 00:25:37.492 --- 10.0.0.1 ping statistics --- 00:25:37.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.492 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.492 09:27:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:37.492 09:27:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:39.400 Waiting for block devices as requested 00:25:39.400 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:39.659 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:39.659 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:39.919 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:39.919 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:39.919 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:39.919 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:40.179 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:40.179 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:40.179 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:40.439 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:40.439 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:40.439 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:40.439 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:40.698 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:40.698 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:40.698 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
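
The configure_kernel_target run traced around this point (the mkdir/echo/ln -s sequence that follows below) condenses to roughly the sketch here. The xtrace does not show the redirection targets of the echo commands, so the configfs attribute names are the standard kernel nvmet ones and are an assumption on my part; the NQN, the backing device /dev/nvme0n1, and the 10.0.0.1:4420 TCP listener all match this run.

    # Sketch: provision a kernel NVMe-oF TCP target through configfs.
    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys"                    # the subsystem itself
    mkdir "$subsys/namespaces/1"       # one namespace
    mkdir "$nvmet/ports/1"             # one listener port
    # Identity string and host policy (destinations assumed, not visible in the trace):
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"
    # Back the namespace with the local NVMe drive found above and enable it:
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    # Describe the TCP listener:
    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"
    # Exposing the subsystem on the port is just a symlink:
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

Once the symlink is in place the discovery service answers on 10.0.0.1:4420, which is exactly what the nvme discover call traced below then verifies (it returns two records: the discovery subsystem and nqn.2016-06.io.spdk:testnqn).
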
00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:40.958 No valid GPT data, bailing 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:40.958 00:25:40.958 Discovery Log Number of Records 2, Generation counter 2 00:25:40.958 =====Discovery Log Entry 0====== 00:25:40.958 trtype: tcp 00:25:40.958 adrfam: ipv4 00:25:40.958 subtype: current discovery subsystem 00:25:40.958 treq: not specified, sq flow control disable supported 00:25:40.958 portid: 1 00:25:40.958 trsvcid: 4420 00:25:40.958 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:40.958 traddr: 10.0.0.1 00:25:40.958 eflags: none 00:25:40.958 sectype: none 00:25:40.958 =====Discovery Log Entry 1====== 00:25:40.958 trtype: tcp 00:25:40.958 adrfam: ipv4 00:25:40.958 subtype: nvme subsystem 00:25:40.958 treq: not specified, sq flow control disable 
supported 00:25:40.958 portid: 1 00:25:40.958 trsvcid: 4420 00:25:40.958 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:40.958 traddr: 10.0.0.1 00:25:40.958 eflags: none 00:25:40.958 sectype: none 00:25:40.958 09:27:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:40.958 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:41.219 ===================================================== 00:25:41.219 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:41.219 ===================================================== 00:25:41.219 Controller Capabilities/Features 00:25:41.219 ================================ 00:25:41.219 Vendor ID: 0000 00:25:41.219 Subsystem Vendor ID: 0000 00:25:41.219 Serial Number: ca8a90858853b9b1d464 00:25:41.219 Model Number: Linux 00:25:41.219 Firmware Version: 6.8.9-20 00:25:41.219 Recommended Arb Burst: 0 00:25:41.219 IEEE OUI Identifier: 00 00 00 00:25:41.219 Multi-path I/O 00:25:41.219 May have multiple subsystem ports: No 00:25:41.219 May have multiple controllers: No 00:25:41.219 Associated with SR-IOV VF: No 00:25:41.219 Max Data Transfer Size: Unlimited 00:25:41.219 Max Number of Namespaces: 0 00:25:41.219 Max Number of I/O Queues: 1024 00:25:41.219 NVMe Specification Version (VS): 1.3 00:25:41.219 NVMe Specification Version (Identify): 1.3 00:25:41.219 Maximum Queue Entries: 1024 00:25:41.219 Contiguous Queues Required: No 00:25:41.219 Arbitration Mechanisms Supported 00:25:41.219 Weighted Round Robin: Not Supported 00:25:41.219 Vendor Specific: Not Supported 00:25:41.219 Reset Timeout: 7500 ms 00:25:41.219 Doorbell Stride: 4 bytes 00:25:41.219 NVM Subsystem Reset: Not Supported 00:25:41.219 Command Sets Supported 00:25:41.219 NVM Command Set: Supported 00:25:41.219 Boot Partition: Not Supported 00:25:41.219 Memory Page Size Minimum: 4096 bytes 00:25:41.219 Memory Page Size Maximum: 4096 bytes 00:25:41.219 Persistent Memory Region: Not Supported 00:25:41.219 Optional Asynchronous Events Supported 00:25:41.219 Namespace Attribute Notices: Not Supported 00:25:41.219 Firmware Activation Notices: Not Supported 00:25:41.219 ANA Change Notices: Not Supported 00:25:41.219 PLE Aggregate Log Change Notices: Not Supported 00:25:41.219 LBA Status Info Alert Notices: Not Supported 00:25:41.219 EGE Aggregate Log Change Notices: Not Supported 00:25:41.219 Normal NVM Subsystem Shutdown event: Not Supported 00:25:41.219 Zone Descriptor Change Notices: Not Supported 00:25:41.219 Discovery Log Change Notices: Supported 00:25:41.219 Controller Attributes 00:25:41.219 128-bit Host Identifier: Not Supported 00:25:41.219 Non-Operational Permissive Mode: Not Supported 00:25:41.219 NVM Sets: Not Supported 00:25:41.219 Read Recovery Levels: Not Supported 00:25:41.219 Endurance Groups: Not Supported 00:25:41.219 Predictable Latency Mode: Not Supported 00:25:41.219 Traffic Based Keep ALive: Not Supported 00:25:41.219 Namespace Granularity: Not Supported 00:25:41.219 SQ Associations: Not Supported 00:25:41.219 UUID List: Not Supported 00:25:41.219 Multi-Domain Subsystem: Not Supported 00:25:41.219 Fixed Capacity Management: Not Supported 00:25:41.219 Variable Capacity Management: Not Supported 00:25:41.219 Delete Endurance Group: Not Supported 00:25:41.219 Delete NVM Set: Not Supported 00:25:41.219 Extended LBA Formats Supported: Not Supported 00:25:41.219 Flexible Data Placement 
Supported: Not Supported 00:25:41.219 00:25:41.219 Controller Memory Buffer Support 00:25:41.219 ================================ 00:25:41.219 Supported: No 00:25:41.219 00:25:41.219 Persistent Memory Region Support 00:25:41.219 ================================ 00:25:41.219 Supported: No 00:25:41.219 00:25:41.219 Admin Command Set Attributes 00:25:41.219 ============================ 00:25:41.219 Security Send/Receive: Not Supported 00:25:41.219 Format NVM: Not Supported 00:25:41.219 Firmware Activate/Download: Not Supported 00:25:41.219 Namespace Management: Not Supported 00:25:41.219 Device Self-Test: Not Supported 00:25:41.219 Directives: Not Supported 00:25:41.219 NVMe-MI: Not Supported 00:25:41.219 Virtualization Management: Not Supported 00:25:41.219 Doorbell Buffer Config: Not Supported 00:25:41.219 Get LBA Status Capability: Not Supported 00:25:41.219 Command & Feature Lockdown Capability: Not Supported 00:25:41.219 Abort Command Limit: 1 00:25:41.219 Async Event Request Limit: 1 00:25:41.219 Number of Firmware Slots: N/A 00:25:41.219 Firmware Slot 1 Read-Only: N/A 00:25:41.219 Firmware Activation Without Reset: N/A 00:25:41.219 Multiple Update Detection Support: N/A 00:25:41.219 Firmware Update Granularity: No Information Provided 00:25:41.219 Per-Namespace SMART Log: No 00:25:41.219 Asymmetric Namespace Access Log Page: Not Supported 00:25:41.219 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:41.219 Command Effects Log Page: Not Supported 00:25:41.219 Get Log Page Extended Data: Supported 00:25:41.219 Telemetry Log Pages: Not Supported 00:25:41.219 Persistent Event Log Pages: Not Supported 00:25:41.219 Supported Log Pages Log Page: May Support 00:25:41.219 Commands Supported & Effects Log Page: Not Supported 00:25:41.219 Feature Identifiers & Effects Log Page:May Support 00:25:41.219 NVMe-MI Commands & Effects Log Page: May Support 00:25:41.219 Data Area 4 for Telemetry Log: Not Supported 00:25:41.219 Error Log Page Entries Supported: 1 00:25:41.219 Keep Alive: Not Supported 00:25:41.219 00:25:41.219 NVM Command Set Attributes 00:25:41.219 ========================== 00:25:41.219 Submission Queue Entry Size 00:25:41.219 Max: 1 00:25:41.219 Min: 1 00:25:41.219 Completion Queue Entry Size 00:25:41.219 Max: 1 00:25:41.219 Min: 1 00:25:41.219 Number of Namespaces: 0 00:25:41.219 Compare Command: Not Supported 00:25:41.219 Write Uncorrectable Command: Not Supported 00:25:41.219 Dataset Management Command: Not Supported 00:25:41.219 Write Zeroes Command: Not Supported 00:25:41.219 Set Features Save Field: Not Supported 00:25:41.219 Reservations: Not Supported 00:25:41.219 Timestamp: Not Supported 00:25:41.219 Copy: Not Supported 00:25:41.219 Volatile Write Cache: Not Present 00:25:41.219 Atomic Write Unit (Normal): 1 00:25:41.219 Atomic Write Unit (PFail): 1 00:25:41.219 Atomic Compare & Write Unit: 1 00:25:41.219 Fused Compare & Write: Not Supported 00:25:41.219 Scatter-Gather List 00:25:41.219 SGL Command Set: Supported 00:25:41.219 SGL Keyed: Not Supported 00:25:41.219 SGL Bit Bucket Descriptor: Not Supported 00:25:41.219 SGL Metadata Pointer: Not Supported 00:25:41.219 Oversized SGL: Not Supported 00:25:41.219 SGL Metadata Address: Not Supported 00:25:41.219 SGL Offset: Supported 00:25:41.219 Transport SGL Data Block: Not Supported 00:25:41.219 Replay Protected Memory Block: Not Supported 00:25:41.219 00:25:41.219 Firmware Slot Information 00:25:41.219 ========================= 00:25:41.219 Active slot: 0 00:25:41.219 00:25:41.219 00:25:41.219 Error Log 00:25:41.219 
========= 00:25:41.219 00:25:41.219 Active Namespaces 00:25:41.219 ================= 00:25:41.219 Discovery Log Page 00:25:41.219 ================== 00:25:41.220 Generation Counter: 2 00:25:41.220 Number of Records: 2 00:25:41.220 Record Format: 0 00:25:41.220 00:25:41.220 Discovery Log Entry 0 00:25:41.220 ---------------------- 00:25:41.220 Transport Type: 3 (TCP) 00:25:41.220 Address Family: 1 (IPv4) 00:25:41.220 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:41.220 Entry Flags: 00:25:41.220 Duplicate Returned Information: 0 00:25:41.220 Explicit Persistent Connection Support for Discovery: 0 00:25:41.220 Transport Requirements: 00:25:41.220 Secure Channel: Not Specified 00:25:41.220 Port ID: 1 (0x0001) 00:25:41.220 Controller ID: 65535 (0xffff) 00:25:41.220 Admin Max SQ Size: 32 00:25:41.220 Transport Service Identifier: 4420 00:25:41.220 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:41.220 Transport Address: 10.0.0.1 00:25:41.220 Discovery Log Entry 1 00:25:41.220 ---------------------- 00:25:41.220 Transport Type: 3 (TCP) 00:25:41.220 Address Family: 1 (IPv4) 00:25:41.220 Subsystem Type: 2 (NVM Subsystem) 00:25:41.220 Entry Flags: 00:25:41.220 Duplicate Returned Information: 0 00:25:41.220 Explicit Persistent Connection Support for Discovery: 0 00:25:41.220 Transport Requirements: 00:25:41.220 Secure Channel: Not Specified 00:25:41.220 Port ID: 1 (0x0001) 00:25:41.220 Controller ID: 65535 (0xffff) 00:25:41.220 Admin Max SQ Size: 32 00:25:41.220 Transport Service Identifier: 4420 00:25:41.220 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:41.220 Transport Address: 10.0.0.1 00:25:41.220 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:41.220 get_feature(0x01) failed 00:25:41.220 get_feature(0x02) failed 00:25:41.220 get_feature(0x04) failed 00:25:41.220 ===================================================== 00:25:41.220 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:41.220 ===================================================== 00:25:41.220 Controller Capabilities/Features 00:25:41.220 ================================ 00:25:41.220 Vendor ID: 0000 00:25:41.220 Subsystem Vendor ID: 0000 00:25:41.220 Serial Number: 1a4305ad0041e6c0adc2 00:25:41.220 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:41.220 Firmware Version: 6.8.9-20 00:25:41.220 Recommended Arb Burst: 6 00:25:41.220 IEEE OUI Identifier: 00 00 00 00:25:41.220 Multi-path I/O 00:25:41.220 May have multiple subsystem ports: Yes 00:25:41.220 May have multiple controllers: Yes 00:25:41.220 Associated with SR-IOV VF: No 00:25:41.220 Max Data Transfer Size: Unlimited 00:25:41.220 Max Number of Namespaces: 1024 00:25:41.220 Max Number of I/O Queues: 128 00:25:41.220 NVMe Specification Version (VS): 1.3 00:25:41.220 NVMe Specification Version (Identify): 1.3 00:25:41.220 Maximum Queue Entries: 1024 00:25:41.220 Contiguous Queues Required: No 00:25:41.220 Arbitration Mechanisms Supported 00:25:41.220 Weighted Round Robin: Not Supported 00:25:41.220 Vendor Specific: Not Supported 00:25:41.220 Reset Timeout: 7500 ms 00:25:41.220 Doorbell Stride: 4 bytes 00:25:41.220 NVM Subsystem Reset: Not Supported 00:25:41.220 Command Sets Supported 00:25:41.220 NVM Command Set: Supported 00:25:41.220 Boot Partition: Not Supported 00:25:41.220 
Memory Page Size Minimum: 4096 bytes 00:25:41.220 Memory Page Size Maximum: 4096 bytes 00:25:41.220 Persistent Memory Region: Not Supported 00:25:41.220 Optional Asynchronous Events Supported 00:25:41.220 Namespace Attribute Notices: Supported 00:25:41.220 Firmware Activation Notices: Not Supported 00:25:41.220 ANA Change Notices: Supported 00:25:41.220 PLE Aggregate Log Change Notices: Not Supported 00:25:41.220 LBA Status Info Alert Notices: Not Supported 00:25:41.220 EGE Aggregate Log Change Notices: Not Supported 00:25:41.220 Normal NVM Subsystem Shutdown event: Not Supported 00:25:41.220 Zone Descriptor Change Notices: Not Supported 00:25:41.220 Discovery Log Change Notices: Not Supported 00:25:41.220 Controller Attributes 00:25:41.220 128-bit Host Identifier: Supported 00:25:41.220 Non-Operational Permissive Mode: Not Supported 00:25:41.220 NVM Sets: Not Supported 00:25:41.220 Read Recovery Levels: Not Supported 00:25:41.220 Endurance Groups: Not Supported 00:25:41.220 Predictable Latency Mode: Not Supported 00:25:41.220 Traffic Based Keep ALive: Supported 00:25:41.220 Namespace Granularity: Not Supported 00:25:41.220 SQ Associations: Not Supported 00:25:41.220 UUID List: Not Supported 00:25:41.220 Multi-Domain Subsystem: Not Supported 00:25:41.220 Fixed Capacity Management: Not Supported 00:25:41.220 Variable Capacity Management: Not Supported 00:25:41.220 Delete Endurance Group: Not Supported 00:25:41.220 Delete NVM Set: Not Supported 00:25:41.220 Extended LBA Formats Supported: Not Supported 00:25:41.220 Flexible Data Placement Supported: Not Supported 00:25:41.220 00:25:41.220 Controller Memory Buffer Support 00:25:41.220 ================================ 00:25:41.220 Supported: No 00:25:41.220 00:25:41.220 Persistent Memory Region Support 00:25:41.220 ================================ 00:25:41.220 Supported: No 00:25:41.220 00:25:41.220 Admin Command Set Attributes 00:25:41.220 ============================ 00:25:41.220 Security Send/Receive: Not Supported 00:25:41.220 Format NVM: Not Supported 00:25:41.220 Firmware Activate/Download: Not Supported 00:25:41.220 Namespace Management: Not Supported 00:25:41.220 Device Self-Test: Not Supported 00:25:41.220 Directives: Not Supported 00:25:41.220 NVMe-MI: Not Supported 00:25:41.220 Virtualization Management: Not Supported 00:25:41.220 Doorbell Buffer Config: Not Supported 00:25:41.220 Get LBA Status Capability: Not Supported 00:25:41.220 Command & Feature Lockdown Capability: Not Supported 00:25:41.220 Abort Command Limit: 4 00:25:41.220 Async Event Request Limit: 4 00:25:41.220 Number of Firmware Slots: N/A 00:25:41.220 Firmware Slot 1 Read-Only: N/A 00:25:41.220 Firmware Activation Without Reset: N/A 00:25:41.220 Multiple Update Detection Support: N/A 00:25:41.220 Firmware Update Granularity: No Information Provided 00:25:41.220 Per-Namespace SMART Log: Yes 00:25:41.220 Asymmetric Namespace Access Log Page: Supported 00:25:41.220 ANA Transition Time : 10 sec 00:25:41.220 00:25:41.220 Asymmetric Namespace Access Capabilities 00:25:41.220 ANA Optimized State : Supported 00:25:41.220 ANA Non-Optimized State : Supported 00:25:41.220 ANA Inaccessible State : Supported 00:25:41.220 ANA Persistent Loss State : Supported 00:25:41.220 ANA Change State : Supported 00:25:41.220 ANAGRPID is not changed : No 00:25:41.220 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:41.220 00:25:41.220 ANA Group Identifier Maximum : 128 00:25:41.220 Number of ANA Group Identifiers : 128 00:25:41.220 Max Number of Allowed Namespaces : 1024 00:25:41.220 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:41.220 Command Effects Log Page: Supported 00:25:41.220 Get Log Page Extended Data: Supported 00:25:41.220 Telemetry Log Pages: Not Supported 00:25:41.220 Persistent Event Log Pages: Not Supported 00:25:41.220 Supported Log Pages Log Page: May Support 00:25:41.220 Commands Supported & Effects Log Page: Not Supported 00:25:41.220 Feature Identifiers & Effects Log Page:May Support 00:25:41.220 NVMe-MI Commands & Effects Log Page: May Support 00:25:41.220 Data Area 4 for Telemetry Log: Not Supported 00:25:41.220 Error Log Page Entries Supported: 128 00:25:41.220 Keep Alive: Supported 00:25:41.220 Keep Alive Granularity: 1000 ms 00:25:41.220 00:25:41.220 NVM Command Set Attributes 00:25:41.220 ========================== 00:25:41.220 Submission Queue Entry Size 00:25:41.220 Max: 64 00:25:41.220 Min: 64 00:25:41.220 Completion Queue Entry Size 00:25:41.220 Max: 16 00:25:41.220 Min: 16 00:25:41.220 Number of Namespaces: 1024 00:25:41.220 Compare Command: Not Supported 00:25:41.220 Write Uncorrectable Command: Not Supported 00:25:41.220 Dataset Management Command: Supported 00:25:41.220 Write Zeroes Command: Supported 00:25:41.221 Set Features Save Field: Not Supported 00:25:41.221 Reservations: Not Supported 00:25:41.221 Timestamp: Not Supported 00:25:41.221 Copy: Not Supported 00:25:41.221 Volatile Write Cache: Present 00:25:41.221 Atomic Write Unit (Normal): 1 00:25:41.221 Atomic Write Unit (PFail): 1 00:25:41.221 Atomic Compare & Write Unit: 1 00:25:41.221 Fused Compare & Write: Not Supported 00:25:41.221 Scatter-Gather List 00:25:41.221 SGL Command Set: Supported 00:25:41.221 SGL Keyed: Not Supported 00:25:41.221 SGL Bit Bucket Descriptor: Not Supported 00:25:41.221 SGL Metadata Pointer: Not Supported 00:25:41.221 Oversized SGL: Not Supported 00:25:41.221 SGL Metadata Address: Not Supported 00:25:41.221 SGL Offset: Supported 00:25:41.221 Transport SGL Data Block: Not Supported 00:25:41.221 Replay Protected Memory Block: Not Supported 00:25:41.221 00:25:41.221 Firmware Slot Information 00:25:41.221 ========================= 00:25:41.221 Active slot: 0 00:25:41.221 00:25:41.221 Asymmetric Namespace Access 00:25:41.221 =========================== 00:25:41.221 Change Count : 0 00:25:41.221 Number of ANA Group Descriptors : 1 00:25:41.221 ANA Group Descriptor : 0 00:25:41.221 ANA Group ID : 1 00:25:41.221 Number of NSID Values : 1 00:25:41.221 Change Count : 0 00:25:41.221 ANA State : 1 00:25:41.221 Namespace Identifier : 1 00:25:41.221 00:25:41.221 Commands Supported and Effects 00:25:41.221 ============================== 00:25:41.221 Admin Commands 00:25:41.221 -------------- 00:25:41.221 Get Log Page (02h): Supported 00:25:41.221 Identify (06h): Supported 00:25:41.221 Abort (08h): Supported 00:25:41.221 Set Features (09h): Supported 00:25:41.221 Get Features (0Ah): Supported 00:25:41.221 Asynchronous Event Request (0Ch): Supported 00:25:41.221 Keep Alive (18h): Supported 00:25:41.221 I/O Commands 00:25:41.221 ------------ 00:25:41.221 Flush (00h): Supported 00:25:41.221 Write (01h): Supported LBA-Change 00:25:41.221 Read (02h): Supported 00:25:41.221 Write Zeroes (08h): Supported LBA-Change 00:25:41.221 Dataset Management (09h): Supported 00:25:41.221 00:25:41.221 Error Log 00:25:41.221 ========= 00:25:41.221 Entry: 0 00:25:41.221 Error Count: 0x3 00:25:41.221 Submission Queue Id: 0x0 00:25:41.221 Command Id: 0x5 00:25:41.221 Phase Bit: 0 00:25:41.221 Status Code: 0x2 00:25:41.221 Status Code Type: 0x0 00:25:41.221 Do Not Retry: 1 00:25:41.221 
Error Location: 0x28 00:25:41.221 LBA: 0x0 00:25:41.221 Namespace: 0x0 00:25:41.221 Vendor Log Page: 0x0 00:25:41.221 ----------- 00:25:41.221 Entry: 1 00:25:41.221 Error Count: 0x2 00:25:41.221 Submission Queue Id: 0x0 00:25:41.221 Command Id: 0x5 00:25:41.221 Phase Bit: 0 00:25:41.221 Status Code: 0x2 00:25:41.221 Status Code Type: 0x0 00:25:41.221 Do Not Retry: 1 00:25:41.221 Error Location: 0x28 00:25:41.221 LBA: 0x0 00:25:41.221 Namespace: 0x0 00:25:41.221 Vendor Log Page: 0x0 00:25:41.221 ----------- 00:25:41.221 Entry: 2 00:25:41.221 Error Count: 0x1 00:25:41.221 Submission Queue Id: 0x0 00:25:41.221 Command Id: 0x4 00:25:41.221 Phase Bit: 0 00:25:41.221 Status Code: 0x2 00:25:41.221 Status Code Type: 0x0 00:25:41.221 Do Not Retry: 1 00:25:41.221 Error Location: 0x28 00:25:41.221 LBA: 0x0 00:25:41.221 Namespace: 0x0 00:25:41.221 Vendor Log Page: 0x0 00:25:41.221 00:25:41.221 Number of Queues 00:25:41.221 ================ 00:25:41.221 Number of I/O Submission Queues: 128 00:25:41.221 Number of I/O Completion Queues: 128 00:25:41.221 00:25:41.221 ZNS Specific Controller Data 00:25:41.221 ============================ 00:25:41.221 Zone Append Size Limit: 0 00:25:41.221 00:25:41.221 00:25:41.221 Active Namespaces 00:25:41.221 ================= 00:25:41.221 get_feature(0x05) failed 00:25:41.221 Namespace ID:1 00:25:41.221 Command Set Identifier: NVM (00h) 00:25:41.221 Deallocate: Supported 00:25:41.221 Deallocated/Unwritten Error: Not Supported 00:25:41.221 Deallocated Read Value: Unknown 00:25:41.221 Deallocate in Write Zeroes: Not Supported 00:25:41.221 Deallocated Guard Field: 0xFFFF 00:25:41.221 Flush: Supported 00:25:41.221 Reservation: Not Supported 00:25:41.221 Namespace Sharing Capabilities: Multiple Controllers 00:25:41.221 Size (in LBAs): 1953525168 (931GiB) 00:25:41.221 Capacity (in LBAs): 1953525168 (931GiB) 00:25:41.221 Utilization (in LBAs): 1953525168 (931GiB) 00:25:41.221 UUID: 3009be2b-c285-484c-b190-d64257f514d9 00:25:41.221 Thin Provisioning: Not Supported 00:25:41.221 Per-NS Atomic Units: Yes 00:25:41.221 Atomic Boundary Size (Normal): 0 00:25:41.221 Atomic Boundary Size (PFail): 0 00:25:41.221 Atomic Boundary Offset: 0 00:25:41.221 NGUID/EUI64 Never Reused: No 00:25:41.221 ANA group ID: 1 00:25:41.221 Namespace Write Protected: No 00:25:41.221 Number of LBA Formats: 1 00:25:41.221 Current LBA Format: LBA Format #00 00:25:41.221 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:41.221 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.221 rmmod nvme_tcp 00:25:41.221 rmmod nvme_fabrics 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:41.221 09:27:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.221 09:27:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:43.761 09:27:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:46.298 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:46.298 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:47.235 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:47.235 00:25:47.235 real 0m16.681s 00:25:47.235 user 0m4.367s 00:25:47.235 sys 0m8.714s 00:25:47.235 09:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:47.235 09:27:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:47.235 ************************************ 00:25:47.235 END TEST nvmf_identify_kernel_target 00:25:47.235 ************************************ 00:25:47.235 09:27:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:47.235 09:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:47.235 09:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:47.235 09:27:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.235 ************************************ 00:25:47.235 START TEST nvmf_auth_host 00:25:47.235 ************************************ 00:25:47.235 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:47.495 * Looking for test storage... 
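
Before the next test starts, note that the clean_kernel_target teardown traced above mirrors the earlier provisioning step for step. A condensed sketch, with the same caveat that the echo's destination is not visible in the xtrace and is assumed to be the namespace's enable attribute:

    # Sketch: tear down the kernel NVMe-oF target created earlier.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    echo 0 > "$subsys/namespaces/1/enable"   # quiesce the namespace (destination assumed)
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"  # unexport from the port
    rmdir "$subsys/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet              # unload the target modules

The order matters: configfs refuses to rmdir a directory that still has children or an active port link, so the namespace directory and the port symlink have to go before the subsystem itself.
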
00:25:47.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:47.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.495 --rc genhtml_branch_coverage=1 00:25:47.495 --rc genhtml_function_coverage=1 00:25:47.495 --rc genhtml_legend=1 00:25:47.495 --rc geninfo_all_blocks=1 00:25:47.495 --rc geninfo_unexecuted_blocks=1 00:25:47.495 00:25:47.495 ' 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:47.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.495 --rc genhtml_branch_coverage=1 00:25:47.495 --rc genhtml_function_coverage=1 00:25:47.495 --rc genhtml_legend=1 00:25:47.495 --rc geninfo_all_blocks=1 00:25:47.495 --rc geninfo_unexecuted_blocks=1 00:25:47.495 00:25:47.495 ' 00:25:47.495 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:47.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.496 --rc genhtml_branch_coverage=1 00:25:47.496 --rc genhtml_function_coverage=1 00:25:47.496 --rc genhtml_legend=1 00:25:47.496 --rc geninfo_all_blocks=1 00:25:47.496 --rc geninfo_unexecuted_blocks=1 00:25:47.496 00:25:47.496 ' 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:47.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.496 --rc genhtml_branch_coverage=1 00:25:47.496 --rc genhtml_function_coverage=1 00:25:47.496 --rc genhtml_legend=1 00:25:47.496 --rc geninfo_all_blocks=1 00:25:47.496 --rc geninfo_unexecuted_blocks=1 00:25:47.496 00:25:47.496 ' 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.496 09:27:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.496 09:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.072 09:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:54.072 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:54.072 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.072 
09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:54.072 Found net devices under 0000:86:00.0: cvl_0_0 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:54.072 Found net devices under 0000:86:00.1: cvl_0_1 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.072 09:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.072 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:25:54.073 00:25:54.073 --- 10.0.0.2 ping statistics --- 00:25:54.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.073 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:25:54.073 00:25:54.073 --- 10.0.0.1 ping statistics --- 00:25:54.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.073 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1244048 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1244048 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1244048 ']' 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
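Note: the nvmf_tcp_init sequence traced above reduces to a short namespace-wiring recipe. The sketch below is condensed from this run's trace only: the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and port 4420 are the values chosen in this job, not general defaults. Moving the target-side port into its own network namespace is what lets the target and the initiator share one physical host while still exercising a real E810 link.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                    # root ns -> target, as verified above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator

With the link verified in both directions, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, as traced above) and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers.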
00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9a45b838897d2670314d718f3e7d6bd5 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.zLI 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9a45b838897d2670314d718f3e7d6bd5 0 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9a45b838897d2670314d718f3e7d6bd5 0 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9a45b838897d2670314d718f3e7d6bd5 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.zLI 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.zLI 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.zLI 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.073 09:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2bd199904589eff68abb7679aa026dc3630ef05bd7de892d40c235c29f2e7f67 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0mq 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2bd199904589eff68abb7679aa026dc3630ef05bd7de892d40c235c29f2e7f67 3 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2bd199904589eff68abb7679aa026dc3630ef05bd7de892d40c235c29f2e7f67 3 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2bd199904589eff68abb7679aa026dc3630ef05bd7de892d40c235c29f2e7f67 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0mq 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0mq 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0mq 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c1f5d7429c1ab23e9bb5ee108cff209961b11a38afebbab6 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ll7 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c1f5d7429c1ab23e9bb5ee108cff209961b11a38afebbab6 0 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c1f5d7429c1ab23e9bb5ee108cff209961b11a38afebbab6 0 
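gen_dhchap_key, traced repeatedly above, pulls random bytes via xxd -p from /dev/urandom and hands the hex string to format_dhchap_key, but xtrace hides the body of the inline "python -" step. A plausible reconstruction, assuming the standard DHHC-1 secret layout (the ASCII secret with a little-endian CRC-32 appended, then base64-encoded), is sketched here; it is consistent with the DHHC-1:00:...: strings echoed later in this log, and the function name is a hypothetical stand-in for the real format_key helper:

format_dhchap_key_sketch() {  # hypothetical name; mirrors format_key DHHC-1 <hex-secret> <digest-id>
    local key=$1 digest=$2    # digest-id: 0=null 1=sha256 2=sha384 3=sha512, per the digests map above
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the hex string is kept as ASCII, not decoded to bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity trailer (assumed little-endian)
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}

format_dhchap_key_sketch 9a45b838897d2670314d718f3e7d6bd5 0   # -> DHHC-1:00:OWE0NWI4...:

As a cross-check, the DHHC-1:00:YzFmNWQ3...xuoyBw==: value appearing further down decodes back to the 48-character hex key generated in this trace plus a 4-byte trailer, matching this layout.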
00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c1f5d7429c1ab23e9bb5ee108cff209961b11a38afebbab6 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ll7 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ll7 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ll7 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e8ce0f1250b8be153189fb948232f1f82c732159805f2816 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:54.073 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XHH 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e8ce0f1250b8be153189fb948232f1f82c732159805f2816 2 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e8ce0f1250b8be153189fb948232f1f82c732159805f2816 2 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e8ce0f1250b8be153189fb948232f1f82c732159805f2816 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XHH 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XHH 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XHH 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.074 09:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7ae6a044821991d08981ef27c0eb7f3a 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ntQ 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7ae6a044821991d08981ef27c0eb7f3a 1 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7ae6a044821991d08981ef27c0eb7f3a 1 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7ae6a044821991d08981ef27c0eb7f3a 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ntQ 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ntQ 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ntQ 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=436c7e32ee63099e96e51e8232721b88 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.G5R 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 436c7e32ee63099e96e51e8232721b88 1 00:25:54.074 09:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 436c7e32ee63099e96e51e8232721b88 1 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=436c7e32ee63099e96e51e8232721b88 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.G5R 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.G5R 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.G5R 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d8c9621d97d56b2b1abc8ea1846426597b9717ae8b826180 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.QJz 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d8c9621d97d56b2b1abc8ea1846426597b9717ae8b826180 2 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d8c9621d97d56b2b1abc8ea1846426597b9717ae8b826180 2 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d8c9621d97d56b2b1abc8ea1846426597b9717ae8b826180 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.QJz 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.QJz 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.QJz 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:54.074 09:27:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3eb5ff74b6f0fda5a0505f4b54e96bf1 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3jB 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3eb5ff74b6f0fda5a0505f4b54e96bf1 0 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3eb5ff74b6f0fda5a0505f4b54e96bf1 0 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3eb5ff74b6f0fda5a0505f4b54e96bf1 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:54.074 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3jB 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3jB 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3jB 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=994b10384c0fb188b4c91c78516968525c5f493d6f2917a2ee6bf26a90b52491 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ohZ 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 994b10384c0fb188b4c91c78516968525c5f493d6f2917a2ee6bf26a90b52491 3 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 994b10384c0fb188b4c91c78516968525c5f493d6f2917a2ee6bf26a90b52491 3 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=994b10384c0fb188b4c91c78516968525c5f493d6f2917a2ee6bf26a90b52491 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ohZ 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ohZ 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ohZ 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1244048 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1244048 ']' 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:54.335 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zLI 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0mq ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0mq 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ll7 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XHH ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.XHH 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ntQ 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.G5R ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G5R 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.QJz 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3jB ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3jB 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ohZ 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.595 09:27:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:54.595 09:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:57.132 Waiting for block devices as requested 00:25:57.132 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:57.391 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:57.391 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:57.650 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:57.650 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:57.650 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:57.650 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:57.910 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:57.910 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:57.910 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:57.910 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:58.170 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:58.170 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:58.170 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:58.428 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:58.428 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:58.428 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:58.996 09:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:58.996 09:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:58.996 09:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:58.996 09:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:58.996 09:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:58.996 09:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:58.996 09:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:58.996 09:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:58.996 09:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:58.996 No valid GPT data, bailing 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:58.996 09:28:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:58.996 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:59.257 00:25:59.257 Discovery Log Number of Records 2, Generation counter 2 00:25:59.257 =====Discovery Log Entry 0====== 00:25:59.257 trtype: tcp 00:25:59.257 adrfam: ipv4 00:25:59.257 subtype: current discovery subsystem 00:25:59.257 treq: not specified, sq flow control disable supported 00:25:59.257 portid: 1 00:25:59.257 trsvcid: 4420 00:25:59.257 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:59.257 traddr: 10.0.0.1 00:25:59.257 eflags: none 00:25:59.257 sectype: none 00:25:59.257 =====Discovery Log Entry 1====== 00:25:59.257 trtype: tcp 00:25:59.257 adrfam: ipv4 00:25:59.257 subtype: nvme subsystem 00:25:59.257 treq: not specified, sq flow control disable supported 00:25:59.257 portid: 1 00:25:59.257 trsvcid: 4420 00:25:59.257 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:59.257 traddr: 10.0.0.1 00:25:59.257 eflags: none 00:25:59.257 sectype: none 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.257 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.517 nvme0n1 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:25:59.517 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
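Each keyid iteration in this trace is the same four-RPC cycle on the initiator: restrict the DH-HMAC-CHAP digest and DH group, attach with the key under test, confirm the controller came up, then detach. A minimal standalone sketch of one iteration, assuming SPDK's scripts/rpc.py in place of the harness's rpc_cmd wrapper and DHHC-1 keys already registered under the names key0/ckey0 (the nvmet_auth_set_key step seen in the trace programs the matching key on the kernel nvmet target side and is omitted here):

  # Sketch of one connect_authenticate pass; assumes scripts/rpc.py is available
  # and that keys named key0/ckey0 were registered earlier in the run.
  # Allow only the digest/DH-group pair under test (here sha256 + ffdhe2048).
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Connect to the target; --dhchap-key authenticates the host, --dhchap-ctrlr-key
  # additionally authenticates the controller (bidirectional authentication).
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Authentication succeeded iff the controller is listed; detach for the next pass.
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The trace that follows repeats this cycle for each digest announced above (sha256, sha384, sha512), each ffdhe group from 2048 through 8192, and all five key IDs.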
00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.518 nvme0n1 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.518 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.778 09:28:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:59.778 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.779 nvme0n1 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.779 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.039 nvme0n1 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:00.039 09:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.039 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.299 nvme0n1 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.299 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.559 nvme0n1 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.559 09:28:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.559 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.560 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.820 nvme0n1 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.820 
09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.820 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.080 nvme0n1 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.080 09:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.080 09:28:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.080 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.340 nvme0n1 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.340 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.341 09:28:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.341 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.600 nvme0n1 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.600 09:28:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.600 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.859 nvme0n1 00:26:01.859 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.859 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.859 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.859 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.859 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.859 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.859 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.859 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.860 09:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 nvme0n1 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:02.120 09:28:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.120 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 nvme0n1 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
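
The nvmet_auth_set_key calls traced above provision each DHHC-1 secret on the kernel nvmet target before the host reconnects. A minimal sketch of that step in plain shell, using the keyid=2 values from this iteration; the configfs paths are an assumption about the Linux nvmet DH-HMAC-CHAP interface and do not appear verbatim in this trace, only the echoed values do:

  # target-side provisioning for one (digest, dhgroup, keyid) tuple
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest negotiated for DH-HMAC-CHAP
  echo ffdhe4096 > "$host/dhchap_dhgroup"        # FFDHE group under test
  echo 'DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/:' > "$host/dhchap_key"
  # controller (bidirectional) key: written only when a ckey exists for this keyid,
  # mirroring the [[ -z ... ]] guard at host/auth.sh@51 in the trace
  echo 'DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1:' > "$host/dhchap_ctrl_key"
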
00:26:02.380 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.640 nvme0n1 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.640 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.900 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.160 nvme0n1 00:26:03.160 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.160 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.160 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.160 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.160 09:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:03.160 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.161 09:28:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.161 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.420 nvme0n1 00:26:03.420 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.420 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.420 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.420 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.420 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.421 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.989 nvme0n1 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 
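
On the host side, each connect_authenticate pass that follows reduces to two SPDK RPCs: restrict the allowed digest/dhgroup, then attach with the DH-HMAC-CHAP key pair. Standalone equivalents for the ffdhe6144/keyid=1 iteration, assuming scripts/rpc.py can reach the target application's RPC socket (rpc_cmd in this harness forwards to it) and that key1/ckey1 name secrets registered earlier in the run:

  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the controller authenticated and came up, then detach so the next
  # keyid iteration starts from a clean state
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0
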
00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.989 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.990 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.990 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:03.990 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.990 09:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.249 nvme0n1 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.249 09:28:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:04.249 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.250 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.509 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.509 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.509 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.768 nvme0n1 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.768 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.769 09:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.337 nvme0n1 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.337 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.597 nvme0n1 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.597 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.855 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.855 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.855 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:05.855 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.856 09:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:06.424 nvme0n1 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.424 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.992 nvme0n1 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:06.992 
09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.992 09:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.561 nvme0n1 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.561 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.821 
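The pass traced above is one full connect_authenticate round: the host is pinned to a single digest/DH-group pair, attached with the matching key pair, verified via bdev_nvme_get_controllers, and detached again. Outside the harness the same RPC sequence looks roughly like this; a sketch assuming SPDK's scripts/rpc.py wrapper (what rpc_cmd resolves to in the trace) and that key2/ckey2 were registered with the keyring earlier in auth.sh. The address, port, and NQNs are the ones from the trace:

    # pin negotiation to one digest and one FFDHE group
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # attach with DH-HMAC-CHAP; keyN is the host key, ckeyN the controller key
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # success shows up as a controller named "nvme0"; the test then detaches it
    ./scripts/rpc.py bdev_nvme_get_controllers
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
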
09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.821 09:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.388 nvme0n1 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.388 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.957 nvme0n1 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.957 09:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.216 nvme0n1 00:26:09.216 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.216 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.217 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.476 nvme0n1 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:09.476 09:28:10 
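Every attach above is preceded by a get_main_ns_ip call that picks the address to dial for the active transport. Reconstructed from the nvmf/common.sh trace lines (@769-@783), the helper amounts to the sketch below; the TEST_TRANSPORT variable name and the merged guard are assumptions, but the rdma -> NVMF_FIRST_TARGET_IP and tcp -> NVMF_INITIATOR_IP mapping is exactly what the trace shows:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # bail out if the transport is unset or has no candidate variable
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
        [[ -z ${!ip} ]] && return 1            # indirect expansion
        echo "${!ip}"                          # 10.0.0.1 in this tcp run
    }
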
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.476 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.477 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.736 nvme0n1 00:26:09.736 09:28:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:09.736 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.737 nvme0n1 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.737 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.996 nvme0n1 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.996 09:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.996 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.996 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.996 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.996 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.996 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.255 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.256 nvme0n1 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.256 
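On the target side, the nvmet_auth_set_key calls are thin wrappers over the kernel nvmet configfs interface: the echo 'hmac(sha384)', echo ffdhe3072, and echo DHHC-1:... steps at host/auth.sh@48-@51 land in per-host attribute files. A sketch of the equivalent manual setup, assuming a kernel with nvmet DH-HMAC-CHAP support (roughly 5.18 or newer); the secrets below are placeholders, real values look like the DHHC-1 strings in the trace:

    HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$HOST/dhchap_hash"      # digest for DH-HMAC-CHAP
    echo ffdhe3072      > "$HOST/dhchap_dhgroup"   # FFDHE group
    echo 'DHHC-1:00:<base64-secret>:' > "$HOST/dhchap_key"      # host key (keyN)
    echo 'DHHC-1:03:<base64-secret>:' > "$HOST/dhchap_ctrl_key" # controller key (ckeyN)

The [[ -z '' ]] guard at @51 shows why keyid 4 skips the last write: that key has no controller secret, so those passes exercise unidirectional authentication only.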
09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:10.256 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.515 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.516 09:28:11 
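The secrets echoed at @45/@46 follow the standard NVMe-oF DH-HMAC-CHAP key format, DHHC-1:<hh>:<base64>:, where <hh> identifies the transformation applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret plus a CRC-32. Keys of this shape can be generated with nvme-cli; a sketch assuming a recent build that ships gen-dhchap-key (the flag spellings come from its documentation, not from this log):

    # 32-byte secret transformed with SHA-256 -> prints a DHHC-1:01:...: key
    nvme gen-dhchap-key --key-length=32 --hmac=1
    # 48-byte secret left untransformed -> prints a DHHC-1:00:...: key
    nvme gen-dhchap-key --key-length=48 --hmac=0
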
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.516 nvme0n1 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.516 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.775 nvme0n1 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.775 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.776 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.036 nvme0n1 00:26:11.036 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.036 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.036 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.036 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.036 09:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:11.036 
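The nvmet_auth_set_key frames traced above (host/auth.sh@42-51) program the kernel nvmet target with the DH-HMAC-CHAP material for one keyid: the digest, the DH group, the key, and, when present, the controller key are echoed in turn, with the auth.sh@51 guard skipping the controller key when ckey is empty (as in the keyid=4 pass here). The xtrace output records only the echoed values, never their destinations, so the following is a minimal sketch of the likely target-side writes, assuming the standard nvmet configfs host attributes; the hostdir path is an assumption, as only the hostnqn nqn.2024-02.io.spdk:host0 appears in the trace:

    # Sketch of the target-side writes (paths assumed; values from the trace)
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$hostdir/dhchap_hash"      # auth.sh@48
    echo ffdhe3072      > "$hostdir/dhchap_dhgroup"   # auth.sh@49
    echo "$key"         > "$hostdir/dhchap_key"       # auth.sh@50: DHHC-1:...
    [[ -n "$ckey" ]] && echo "$ckey" > "$hostdir/dhchap_ctrl_key"   # auth.sh@51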
09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.036 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.295 nvme0n1 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.295 
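On the host side, each connect_authenticate pass (host/auth.sh@55-61) first pins the negotiation to a single digest and DH group via bdev_nvme_set_options, resolves the target address through get_main_ns_ip (nvmf/common.sh@769-783: NVMF_INITIATOR_IP for tcp, NVMF_FIRST_TARGET_IP for rdma, yielding 10.0.0.1 here), and then attaches with that keyid's secrets. The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at auth.sh@58 appends the controller-key flag only when a controller key exists for the keyid, which is why the keyid=4 attach above carries --dhchap-key key4 alone, i.e. unidirectional authentication. Condensed from the trace, with rpc_cmd taken to be the suite's wrapper around SPDK's scripts/rpc.py:

    # One connect_authenticate pass as issued over the SPDK RPC interface
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4                 # ckeys[4] empty: no --dhchap-ctrlr-key
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # auth.sh@64: expect nvme0
    rpc.py bdev_nvme_detach_controller nvme0              # auth.sh@65: tear down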
09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.295 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.296 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.555 nvme0n1 00:26:11.555 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.555 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.555 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.555 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.555 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.555 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:11.814 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.815 09:28:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.815 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.074 nvme0n1 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.074 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.075 09:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.334 nvme0n1 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.334 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.593 nvme0n1 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:12.593 09:28:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.593 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.852 nvme0n1 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.852 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.853 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.853 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.853 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.112 09:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.372 nvme0n1 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.372 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.631 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.632 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:13.632 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.632 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.891 nvme0n1 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.891 09:28:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.891 09:28:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.891 09:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.460 nvme0n1 00:26:14.460 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.460 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:14.461 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.461 
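
The get_main_ns_ip block that repeats before every attach (nvmf/common.sh@769-783) just picks which address variable to dereference for the current transport; on tcp it resolves NVMF_INITIATOR_IP to 10.0.0.1. A readable reconstruction of the traced logic; the TEST_TRANSPORT name and the early returns are assumptions filled in around what the trace shows:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # "tcp" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                           # indirect expansion
        echo "${!ip}"                                         # -> 10.0.0.1
    }
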
09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.721 nvme0n1 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.721 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.980 09:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.240 nvme0n1 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.240 09:28:16 
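
Each iteration above boils down to four initiator-side operations: restrict the digests and DH groups the host may negotiate, attach with the keyring entries for this key id, confirm the controller came up, and detach. The same sequence as standalone SPDK CLI calls; rpc_cmd in the trace is the in-tree wrapper, and scripts/rpc.py is the stock client (key2/ckey2 name keyring keys loaded earlier in the test):

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
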
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.240 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.825 nvme0n1 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.825 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.105 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.106 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.106 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.106 09:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.724 nvme0n1 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.724 
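
The ckey=(...) assignment at host/auth.sh@58 is the idiom that lets key id 4, which has no controller key, reuse the same attach command: the array expands to the two extra arguments only when ckeys[keyid] is non-empty, and to nothing otherwise. A self-contained demo with illustrative values:

    ckeys=([1]="DHHC-1:01:example:" [4]="")   # indexed array; values illustrative
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s):" "${ckey[@]}"
    done
    # keyid=1 -> 2 extra arg(s): --dhchap-ctrlr-key ckey1
    # keyid=4 -> 0 extra arg(s):
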
09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.724 09:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.291 nvme0n1 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.291 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.292 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.859 nvme0n1 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.859 09:28:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.859 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:17.860 09:28:18 
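
The DHHC-1 strings cycled through this section are NVMe-oF DH-HMAC-CHAP secrets in their standard textual form, DHHC-1:NN:<base64>:, where NN records the transform applied to the secret (00 = none, 01/02/03 = SHA-256/384/512) and the payload is base64 over the raw secret plus a CRC-32, which is why the 32-, 48- and 64-byte secrets above yield progressively longer payloads. Secrets of the same shape can be produced with nvme-cli; the invocation below is an assumption about a recent nvme-cli, not something taken from this log:

    nvme gen-dhchap-key --nqn nqn.2024-02.io.spdk:host0    # prints a DHHC-1:00:...: secret
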
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.860 09:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.426 nvme0n1 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.426 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.684 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:18.684 nvme0n1 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.685 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.943 nvme0n1 00:26:18.943 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.943 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.943 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.943 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.943 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:18.944 
09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.944 09:28:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.202 nvme0n1 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.202 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.203 
09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.203 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 nvme0n1 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.462 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.722 nvme0n1 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.722 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.982 nvme0n1 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.982 
09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:19.982 09:28:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.982 09:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.242 nvme0n1 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:20.242 09:28:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.242 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.502 nvme0n1 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.502 09:28:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.502 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.762 nvme0n1 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:20.762 
09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.762 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
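[annotation] Every pass above and below exercises the same connect/verify/detach cycle: the trace iterates the sha512 digest over the ffdhe2048, ffdhe3072, ffdhe4096, and ffdhe6144 DH groups, and key IDs 0-4 within each group. A minimal standalone sketch of one such pass follows, assuming SPDK's scripts/rpc.py is invoked directly rather than through the test suite's rpc_cmd wrapper; the flags, address, and NQNs are taken verbatim from the trace, and key0/ckey0 stand in for whichever DHHC-1 secrets are loaded for the key ID under test.

    # assumes a target is already listening on 10.0.0.1:4420 with matching DH-HMAC-CHAP keys configured
    # restrict the host to the digest/DH-group pair under test
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # connect with host key 0; --dhchap-ctrlr-key enables bidirectional auth and is
    # omitted for key IDs with no controller key (key ID 4 in this trace)
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # verify the controller authenticated and came up, then detach before the next pass
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0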
00:26:21.021 nvme0n1 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.021 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.022 09:28:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.022 09:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.281 nvme0n1 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.281 09:28:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.281 09:28:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.281 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.540 nvme0n1 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.540 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.799 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.799 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.799 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.800 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.059 nvme0n1 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.059 09:28:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.318 nvme0n1 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.318 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.577 nvme0n1 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.577 09:28:23 
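Each pass traced above begins with nvmet_auth_set_key <digest> <dhgroup> <keyid>, which installs the DH-HMAC-CHAP material for the host entry on the kernel nvmet target before the SPDK initiator attempts to connect. xtrace does not print redirections, so only the echo payloads ('hmac(sha512)', the dhgroup, the DHHC-1 secrets) are visible; the configfs paths in the sketch below are an assumption about where auth.sh writes them, and keys/ckeys are the arrays the trace indexes by keyid:

  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest="$1" dhgroup="$2" keyid="$3"
      key="${keys[keyid]}" ckey="${ckeys[keyid]}"

      # ASSUMED: $nvmet_host = /sys/kernel/config/nvmet/hosts/<hostnqn>;
      # the redirection targets never appear in the xtrace output
      echo "hmac($digest)" > "$nvmet_host/dhchap_hash"
      echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"
      echo "$key" > "$nvmet_host/dhchap_key"
      # keyid 4 has no controller key (ckey=), so this half is conditional
      [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
  }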
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.577 09:28:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.145 nvme0n1 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.145 09:28:24 
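The connect/verify/teardown cycle repeated for every (digest, dhgroup, keyid) tuple is fully visible in the RPCs above. A minimal self-contained reproduction of one pass, assuming rpc_cmd forwards to scripts/rpc.py of the running SPDK application and that key<N>/ckey<N> name keys the initiator already holds:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3

      # restrict the initiator to exactly the tuple under test
      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}

      # the attach only succeeds if DH-HMAC-CHAP completed; verify, then
      # tear down so the next (digest, dhgroup, keyid) tuple starts clean
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }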
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.145 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.714 nvme0n1 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.714 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.973 nvme0n1 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.973 09:28:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.973 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.540 nvme0n1 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:24.540 09:28:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.540 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.541 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.541 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.541 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.541 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.541 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.541 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.541 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.541 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.541 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.799 nvme0n1 00:26:24.799 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.799 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.799 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.799 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.799 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.799 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE0NWI4Mzg4OTdkMjY3MDMxNGQ3MThmM2U3ZDZiZDX6KCPq: 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: ]] 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmJkMTk5OTA0NTg5ZWZmNjhhYmI3Njc5YWEwMjZkYzM2MzBlZjA1YmQ3ZGU4OTJkNDBjMjM1YzI5ZjJlN2Y2NwXBomk=: 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:25.058 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.059 09:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.626 nvme0n1 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:25.626 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.627 09:28:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.198 nvme0n1 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.198 09:28:27 
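Before every attach the trace walks get_main_ns_ip (nvmf/common.sh@769-783): an associative array maps the transport to the name of the environment variable holding the address, and the 10.0.0.1 finally echoed is NVMF_INITIATOR_IP dereferenced, since this job runs tcp. A sketch of that helper as the trace suggests, with the early-return branches filled in as assumptions (only the happy path appears in this log):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable names, not values
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                  # assumed error branch
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1          # the [[ -z 10.0.0.1 ]] check above
      echo "${!ip}"   # indirect expansion: tcp -> $NVMF_INITIATOR_IP -> 10.0.0.1
  }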
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.198 09:28:27 
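Every secret in this log uses the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> identifies the transformation hash (00 = cleartext, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, matching 32/48/64-byte keys) and the base64 payload carries the key material plus a 4-byte CRC-32. A quick plausibility check against one of the keyid 2 secrets above, assuming coreutils base64 is available:

  # field 2 is the hash id, field 3 the base64(key || CRC-32) payload
  secret='DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/:'
  IFS=: read -r fmt hash b64 _ <<< "$secret"
  printf '%s hash=%s payload=%d bytes (32-byte key + CRC-32)\n' \
      "$fmt" "$hash" "$(printf '%s' "$b64" | base64 -d | wc -c)"
  # prints: DHHC-1 hash=01 payload=36 bytes (32-byte key + CRC-32)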
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.198 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.134 nvme0n1 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.134 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDhjOTYyMWQ5N2Q1NmIyYjFhYmM4ZWExODQ2NDI2NTk3Yjk3MTdhZThiODI2MTgwrVvV2g==: 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: ]] 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2ViNWZmNzRiNmYwZmRhNWEwNTA1ZjRiNTRlOTZiZjGrNcUf: 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.135 09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.135 
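The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at host/auth.sh@58 explains a detail visible throughout: keyids 0-3 attach with both --dhchap-key and --dhchap-ctrlr-key (bidirectional authentication), while keyid 4, whose ckey is empty, attaches with --dhchap-key alone. Storing the optional flag in an array keeps both cases on one code path, since an empty array expands to nothing:

  # assumes keyid and the ckeys array are set as in the trace above
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
      -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"   # "${ckey[@]}" may vanish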
09:28:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.701 nvme0n1 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTk0YjEwMzg0YzBmYjE4OGI0YzkxYzc4NTE2OTY4NTI1YzVmNDkzZDZmMjkxN2EyZWU2YmYyNmE5MGI1MjQ5MfBQyYg=: 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.701 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.702 09:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.269 nvme0n1 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.269 request: 00:26:28.269 { 00:26:28.269 "name": "nvme0", 00:26:28.269 "trtype": "tcp", 00:26:28.269 "traddr": "10.0.0.1", 00:26:28.269 "adrfam": "ipv4", 00:26:28.269 "trsvcid": "4420", 00:26:28.269 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.269 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.269 "prchk_reftag": false, 00:26:28.269 "prchk_guard": false, 00:26:28.269 "hdgst": false, 00:26:28.269 "ddgst": false, 00:26:28.269 "allow_unrecognized_csi": false, 00:26:28.269 "method": "bdev_nvme_attach_controller", 00:26:28.269 "req_id": 1 00:26:28.269 } 00:26:28.269 Got JSON-RPC error response 00:26:28.269 response: 00:26:28.269 { 00:26:28.269 "code": -5, 00:26:28.269 "message": "Input/output error" 00:26:28.269 } 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.269 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
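The get_main_ns_ip helper being traced here resolves which IP the initiator should dial for the active transport. A condensed sketch of the idiom, reconstructed from the xtrace above (nvmf/common.sh@769-783); the TEST_TRANSPORT variable name and the inline fallback are illustrative assumptions, the real helper reads both from the test environment:

# Sketch only: map each transport to the *name* of the env var holding its
# IP, then dereference that name. In this run the transport is tcp and the
# resolved address is 10.0.0.1, matching the log.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs use the initiator-side IP
    ip=${ip_candidates[${TEST_TRANSPORT:-tcp}]}  # pick the variable name
    echo "${!ip:-10.0.0.1}"                      # indirect expansion; fallback assumed
}

Storing variable names rather than values lets the same helper serve both transports without re-evaluating the environment at definition time.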
00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.528 request: 00:26:28.528 { 00:26:28.528 "name": "nvme0", 00:26:28.528 "trtype": "tcp", 00:26:28.528 "traddr": "10.0.0.1", 00:26:28.528 "adrfam": "ipv4", 00:26:28.528 "trsvcid": "4420", 00:26:28.528 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.528 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.528 "prchk_reftag": false, 00:26:28.528 "prchk_guard": false, 00:26:28.528 "hdgst": false, 00:26:28.528 "ddgst": false, 00:26:28.528 "dhchap_key": "key2", 00:26:28.528 "allow_unrecognized_csi": false, 00:26:28.528 "method": "bdev_nvme_attach_controller", 00:26:28.528 "req_id": 1 00:26:28.528 } 00:26:28.528 Got JSON-RPC error response 00:26:28.528 response: 00:26:28.528 { 00:26:28.528 "code": -5, 00:26:28.528 "message": "Input/output error" 00:26:28.528 } 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
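Each of these expected failures goes through the suite's NOT wrapper, which inverts the wrapped command's exit status so that a rejected attach counts as a pass. A minimal sketch of the pattern, simplified from the autotest_common.sh trace above (the real wrapper also routes through valid_exec_arg and toggles xtrace); rpc_cmd is the suite's own JSON-RPC client helper:

NOT() {
    local es=0
    "$@" || es=$?
    # Pass only if the wrapped command failed. Here the target was keyed
    # with keyid=1, so attaching with key2 (or with no key at all) is
    # rejected and the RPC surfaces code -5, "Input/output error".
    (( es != 0 ))
}

# Expected to fail: wrong DH-HMAC-CHAP key for the authenticated subsystem.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
    -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2

# And verify the failed handshake left no controller behind (jq length == 0).
(($(rpc_cmd bdev_nvme_get_controllers | jq length) == 0))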
00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.528 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.529 request: 00:26:28.529 { 00:26:28.529 "name": "nvme0", 00:26:28.529 "trtype": "tcp", 00:26:28.529 "traddr": "10.0.0.1", 00:26:28.529 "adrfam": "ipv4", 00:26:28.529 "trsvcid": "4420", 00:26:28.529 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:28.529 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:28.529 "prchk_reftag": false, 00:26:28.529 "prchk_guard": false, 00:26:28.529 "hdgst": false, 00:26:28.529 "ddgst": false, 00:26:28.529 "dhchap_key": "key1", 00:26:28.529 "dhchap_ctrlr_key": "ckey2", 00:26:28.529 "allow_unrecognized_csi": false, 00:26:28.529 "method": "bdev_nvme_attach_controller", 00:26:28.529 "req_id": 1 00:26:28.529 } 00:26:28.529 Got JSON-RPC error response 00:26:28.529 response: 00:26:28.529 { 00:26:28.529 "code": -5, 00:26:28.529 "message": "Input/output 
error" 00:26:28.529 } 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.529 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.788 nvme0n1 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.788 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.047 request: 00:26:29.047 { 00:26:29.047 "name": "nvme0", 00:26:29.047 "dhchap_key": "key1", 00:26:29.047 "dhchap_ctrlr_key": "ckey2", 00:26:29.047 "method": "bdev_nvme_set_keys", 00:26:29.047 "req_id": 1 00:26:29.047 } 00:26:29.047 Got JSON-RPC error response 00:26:29.047 response: 00:26:29.047 { 00:26:29.047 "code": -13, 00:26:29.047 "message": "Permission denied" 00:26:29.047 } 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:29.047 09:28:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:29.983 09:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.983 09:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:29.983 09:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.983 09:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.983 09:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.983 09:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:29.983 09:28:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:30.918 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.918 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:30.918 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.918 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.918 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFmNWQ3NDI5YzFhYjIzZTliYjVlZTEwOGNmZjIwOTk2MWIxMWEzOGFmZWJiYWI2xuoyBw==: 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: ]] 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZThjZTBmMTI1MGI4YmUxNTMxODlmYjk0ODIzMmYxZjgyYzczMjE1OTgwNWYyODE2kw8gjg==: 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.178 09:28:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.178 nvme0n1 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:31.178 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2FlNmEwNDQ4MjE5OTFkMDg5ODFlZjI3YzBlYjdmM2F16cH/: 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: ]] 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDM2YzdlMzJlZTYzMDk5ZTk2ZTUxZTgyMzI3MjFiODgP/2y1: 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.179 request: 00:26:31.179 { 00:26:31.179 "name": "nvme0", 00:26:31.179 "dhchap_key": "key2", 00:26:31.179 "dhchap_ctrlr_key": "ckey1", 00:26:31.179 "method": "bdev_nvme_set_keys", 00:26:31.179 "req_id": 1 00:26:31.179 } 00:26:31.179 Got JSON-RPC error response 00:26:31.179 response: 00:26:31.179 { 00:26:31.179 "code": -13, 00:26:31.179 "message": "Permission denied" 00:26:31.179 } 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:31.179 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:31.438 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.438 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:31.438 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.438 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.438 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.438 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:31.438 09:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:32.373 09:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.373 rmmod nvme_tcp 00:26:32.373 rmmod nvme_fabrics 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1244048 ']' 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1244048 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 1244048 ']' 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 1244048 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1244048 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1244048' 00:26:32.373 killing process with pid 1244048 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 1244048 00:26:32.373 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 1244048 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:32.631 09:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:35.166 09:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:37.702 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:37.702 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:38.639 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:38.639 09:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.zLI /tmp/spdk.key-null.ll7 /tmp/spdk.key-sha256.ntQ /tmp/spdk.key-sha384.QJz /tmp/spdk.key-sha512.ohZ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:38.639 09:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:41.929 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:41.929 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:26:41.929 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:41.929 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:41.929 00:26:41.929 real 0m54.228s 00:26:41.929 user 0m48.909s 00:26:41.929 sys 0m12.711s 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.929 ************************************ 00:26:41.929 END TEST nvmf_auth_host 00:26:41.929 ************************************ 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.929 ************************************ 00:26:41.929 START TEST nvmf_digest 00:26:41.929 ************************************ 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:41.929 * Looking for test storage... 
00:26:41.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.929 --rc genhtml_branch_coverage=1 00:26:41.929 --rc genhtml_function_coverage=1 00:26:41.929 --rc genhtml_legend=1 00:26:41.929 --rc geninfo_all_blocks=1 00:26:41.929 --rc geninfo_unexecuted_blocks=1 00:26:41.929 00:26:41.929 ' 00:26:41.929 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:41.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.929 --rc genhtml_branch_coverage=1 00:26:41.929 --rc genhtml_function_coverage=1 00:26:41.930 --rc genhtml_legend=1 00:26:41.930 --rc geninfo_all_blocks=1 00:26:41.930 --rc geninfo_unexecuted_blocks=1 00:26:41.930 00:26:41.930 ' 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:41.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.930 --rc genhtml_branch_coverage=1 00:26:41.930 --rc genhtml_function_coverage=1 00:26:41.930 --rc genhtml_legend=1 00:26:41.930 --rc geninfo_all_blocks=1 00:26:41.930 --rc geninfo_unexecuted_blocks=1 00:26:41.930 00:26:41.930 ' 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:41.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.930 --rc genhtml_branch_coverage=1 00:26:41.930 --rc genhtml_function_coverage=1 00:26:41.930 --rc genhtml_legend=1 00:26:41.930 --rc geninfo_all_blocks=1 00:26:41.930 --rc geninfo_unexecuted_blocks=1 00:26:41.930 00:26:41.930 ' 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.930 
09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:41.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.930 09:28:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.930 09:28:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.503 
09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:48.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:48.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:48.503 Found net devices under 0000:86:00.0: cvl_0_0 
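This device walk is how the test maps whitelisted PCI addresses to kernel interface names: for each matching NIC, sysfs exposes the bound net device under the PCI device's net/ directory. A condensed sketch reconstructed from the trace above (nvmf/common.sh@410-429); the pci_devs array is the e810 list built just before, and the link-state check is simplified away:

net_devs=()
for pci in "${pci_devs[@]}"; do
    # e.g. /sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done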
00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:48.503 Found net devices under 0000:86:00.1: cvl_0_1 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.503 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:26:48.504 00:26:48.504 --- 10.0.0.2 ping statistics --- 00:26:48.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.504 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:48.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:26:48.504 00:26:48.504 --- 10.0.0.1 ping statistics --- 00:26:48.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.504 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 ************************************ 00:26:48.504 START TEST nvmf_digest_clean 00:26:48.504 ************************************ 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1257807 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1257807 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1257807 ']' 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 [2024-11-19 09:28:48.774378] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:26:48.504 [2024-11-19 09:28:48.774427] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.504 [2024-11-19 09:28:48.855088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.504 [2024-11-19 09:28:48.895448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.504 [2024-11-19 09:28:48.895484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.504 [2024-11-19 09:28:48.895494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.504 [2024-11-19 09:28:48.895501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.504 [2024-11-19 09:28:48.895508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
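
The nvmf_tcp_init sequence traced earlier builds a self-contained NVMe/TCP link out of the two E810 ports: one port moves into a private network namespace and takes the target address, the other stays in the root namespace as the initiator. Condensed from the commands shown in the trace (cvl_0_0/cvl_0_1 as above):

ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                  # verify the path before testing
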
00:26:48.504 [2024-11-19 09:28:48.896089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.504 09:28:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 null0 00:26:48.504 [2024-11-19 09:28:49.061159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.504 [2024-11-19 09:28:49.085368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1257836 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1257836 /var/tmp/bperf.sock 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1257836 ']' 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:48.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:48.504 [2024-11-19 09:28:49.138332] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:26:48.504 [2024-11-19 09:28:49.138373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257836 ] 00:26:48.504 [2024-11-19 09:28:49.213605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.504 [2024-11-19 09:28:49.256417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:48.504 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:48.505 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:48.505 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:48.505 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:48.505 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.505 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.765 nvme0n1 00:26:48.765 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:48.765 09:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:49.024 Running I/O for 2 seconds... 
00:26:50.899 24961.00 IOPS, 97.50 MiB/s [2024-11-19T08:28:51.958Z] 25266.50 IOPS, 98.70 MiB/s 00:26:50.899 Latency(us) 00:26:50.900 [2024-11-19T08:28:51.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:50.900 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:50.900 nvme0n1 : 2.01 25273.01 98.72 0.00 0.00 5060.13 2664.18 11511.54 00:26:50.900 [2024-11-19T08:28:51.959Z] =================================================================================================================== 00:26:50.900 [2024-11-19T08:28:51.959Z] Total : 25273.01 98.72 0.00 0.00 5060.13 2664.18 11511.54 00:26:50.900 { 00:26:50.900 "results": [ 00:26:50.900 { 00:26:50.900 "job": "nvme0n1", 00:26:50.900 "core_mask": "0x2", 00:26:50.900 "workload": "randread", 00:26:50.900 "status": "finished", 00:26:50.900 "queue_depth": 128, 00:26:50.900 "io_size": 4096, 00:26:50.900 "runtime": 2.00819, 00:26:50.900 "iops": 25273.007036186817, 00:26:50.900 "mibps": 98.72268373510475, 00:26:50.900 "io_failed": 0, 00:26:50.900 "io_timeout": 0, 00:26:50.900 "avg_latency_us": 5060.125932191629, 00:26:50.900 "min_latency_us": 2664.1808695652176, 00:26:50.900 "max_latency_us": 11511.540869565217 00:26:50.900 } 00:26:50.900 ], 00:26:50.900 "core_count": 1 00:26:50.900 } 00:26:50.900 09:28:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:50.900 09:28:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:50.900 09:28:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:50.900 09:28:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:50.900 | select(.opcode=="crc32c") 00:26:50.900 | "\(.module_name) \(.executed)"' 00:26:50.900 09:28:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1257836 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1257836 ']' 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1257836 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1257836 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1257836' 00:26:51.159 killing process with pid 1257836 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1257836 00:26:51.159 Received shutdown signal, test time was about 2.000000 seconds 00:26:51.159 00:26:51.159 Latency(us) 00:26:51.159 [2024-11-19T08:28:52.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.159 [2024-11-19T08:28:52.218Z] =================================================================================================================== 00:26:51.159 [2024-11-19T08:28:52.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.159 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1257836 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1258306 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1258306 /var/tmp/bperf.sock 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1258306 ']' 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:51.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:51.418 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.418 [2024-11-19 09:28:52.374082] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:26:51.418 [2024-11-19 09:28:52.374130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258306 ] 00:26:51.418 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:51.418 Zero copy mechanism will not be used. 00:26:51.418 [2024-11-19 09:28:52.450121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.677 [2024-11-19 09:28:52.493663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.677 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:51.677 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:51.677 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:51.677 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:51.677 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:51.934 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:51.934 09:28:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:52.192 nvme0n1 00:26:52.192 09:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:52.192 09:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:52.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:52.192 Zero copy mechanism will not be used. 00:26:52.192 Running I/O for 2 seconds... 
00:26:54.498 5724.00 IOPS, 715.50 MiB/s [2024-11-19T08:28:55.557Z] 5460.00 IOPS, 682.50 MiB/s 00:26:54.498 Latency(us) 00:26:54.498 [2024-11-19T08:28:55.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.498 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:54.498 nvme0n1 : 2.00 5461.76 682.72 0.00 0.00 2926.77 655.36 8149.26 00:26:54.498 [2024-11-19T08:28:55.557Z] =================================================================================================================== 00:26:54.498 [2024-11-19T08:28:55.557Z] Total : 5461.76 682.72 0.00 0.00 2926.77 655.36 8149.26 00:26:54.498 { 00:26:54.498 "results": [ 00:26:54.498 { 00:26:54.498 "job": "nvme0n1", 00:26:54.498 "core_mask": "0x2", 00:26:54.498 "workload": "randread", 00:26:54.498 "status": "finished", 00:26:54.498 "queue_depth": 16, 00:26:54.498 "io_size": 131072, 00:26:54.498 "runtime": 2.002285, 00:26:54.498 "iops": 5461.759939269385, 00:26:54.498 "mibps": 682.7199924086731, 00:26:54.498 "io_failed": 0, 00:26:54.498 "io_timeout": 0, 00:26:54.498 "avg_latency_us": 2926.773636970834, 00:26:54.498 "min_latency_us": 655.36, 00:26:54.498 "max_latency_us": 8149.2591304347825 00:26:54.498 } 00:26:54.498 ], 00:26:54.498 "core_count": 1 00:26:54.498 } 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:54.498 | select(.opcode=="crc32c") 00:26:54.498 | "\(.module_name) \(.executed)"' 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1258306 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1258306 ']' 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1258306 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1258306 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = 
sudo ']' 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1258306' 00:26:54.498 killing process with pid 1258306 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1258306 00:26:54.498 Received shutdown signal, test time was about 2.000000 seconds 00:26:54.498 00:26:54.498 Latency(us) 00:26:54.498 [2024-11-19T08:28:55.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.498 [2024-11-19T08:28:55.557Z] =================================================================================================================== 00:26:54.498 [2024-11-19T08:28:55.557Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.498 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1258306 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1258846 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1258846 /var/tmp/bperf.sock 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1258846 ']' 00:26:54.756 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:54.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:54.757 [2024-11-19 09:28:55.639998] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:26:54.757 [2024-11-19 09:28:55.640054] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258846 ] 00:26:54.757 [2024-11-19 09:28:55.715552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.757 [2024-11-19 09:28:55.755662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:54.757 09:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:55.323 09:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.323 09:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:55.323 nvme0n1 00:26:55.581 09:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:55.581 09:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:55.581 Running I/O for 2 seconds... 
00:26:57.515 27776.00 IOPS, 108.50 MiB/s [2024-11-19T08:28:58.574Z] 27812.00 IOPS, 108.64 MiB/s 00:26:57.515 Latency(us) 00:26:57.515 [2024-11-19T08:28:58.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.515 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:57.515 nvme0n1 : 2.01 27789.64 108.55 0.00 0.00 4602.22 2222.53 9516.97 00:26:57.515 [2024-11-19T08:28:58.574Z] =================================================================================================================== 00:26:57.515 [2024-11-19T08:28:58.574Z] Total : 27789.64 108.55 0.00 0.00 4602.22 2222.53 9516.97 00:26:57.515 { 00:26:57.515 "results": [ 00:26:57.515 { 00:26:57.515 "job": "nvme0n1", 00:26:57.515 "core_mask": "0x2", 00:26:57.515 "workload": "randwrite", 00:26:57.515 "status": "finished", 00:26:57.515 "queue_depth": 128, 00:26:57.515 "io_size": 4096, 00:26:57.515 "runtime": 2.007439, 00:26:57.515 "iops": 27789.636447234512, 00:26:57.515 "mibps": 108.55326737200981, 00:26:57.515 "io_failed": 0, 00:26:57.515 "io_timeout": 0, 00:26:57.515 "avg_latency_us": 4602.218223366, 00:26:57.515 "min_latency_us": 2222.5252173913045, 00:26:57.515 "max_latency_us": 9516.96695652174 00:26:57.515 } 00:26:57.515 ], 00:26:57.515 "core_count": 1 00:26:57.515 } 00:26:57.515 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:57.515 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:57.515 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:57.515 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:57.515 | select(.opcode=="crc32c") 00:26:57.515 | "\(.module_name) \(.executed)"' 00:26:57.515 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:57.824 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1258846 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1258846 ']' 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1258846 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1258846 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1258846' 00:26:57.825 killing process with pid 1258846 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1258846 00:26:57.825 Received shutdown signal, test time was about 2.000000 seconds 00:26:57.825 00:26:57.825 Latency(us) 00:26:57.825 [2024-11-19T08:28:58.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.825 [2024-11-19T08:28:58.884Z] =================================================================================================================== 00:26:57.825 [2024-11-19T08:28:58.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:57.825 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1258846 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1259468 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1259468 /var/tmp/bperf.sock 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1259468 ']' 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:58.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:58.083 09:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:58.083 [2024-11-19 09:28:58.990901] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:26:58.083 [2024-11-19 09:28:58.990954] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1259468 ] 00:26:58.084 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.084 Zero copy mechanism will not be used. 00:26:58.084 [2024-11-19 09:28:59.065091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.084 [2024-11-19 09:28:59.105881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.340 09:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:58.340 09:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:58.340 09:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:58.340 09:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:58.340 09:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:58.597 09:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.597 09:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:58.855 nvme0n1 00:26:58.855 09:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:58.855 09:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:59.113 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:59.113 Zero copy mechanism will not be used. 00:26:59.113 Running I/O for 2 seconds... 
00:27:00.983 6111.00 IOPS, 763.88 MiB/s [2024-11-19T08:29:02.042Z] 6256.00 IOPS, 782.00 MiB/s 00:27:00.983 Latency(us) 00:27:00.983 [2024-11-19T08:29:02.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.983 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:00.983 nvme0n1 : 2.00 6252.05 781.51 0.00 0.00 2554.91 1894.85 11967.44 00:27:00.983 [2024-11-19T08:29:02.042Z] =================================================================================================================== 00:27:00.983 [2024-11-19T08:29:02.042Z] Total : 6252.05 781.51 0.00 0.00 2554.91 1894.85 11967.44 00:27:00.983 { 00:27:00.983 "results": [ 00:27:00.983 { 00:27:00.983 "job": "nvme0n1", 00:27:00.983 "core_mask": "0x2", 00:27:00.983 "workload": "randwrite", 00:27:00.983 "status": "finished", 00:27:00.983 "queue_depth": 16, 00:27:00.983 "io_size": 131072, 00:27:00.983 "runtime": 2.003984, 00:27:00.983 "iops": 6252.045924518359, 00:27:00.983 "mibps": 781.5057405647949, 00:27:00.983 "io_failed": 0, 00:27:00.983 "io_timeout": 0, 00:27:00.983 "avg_latency_us": 2554.9083391228, 00:27:00.983 "min_latency_us": 1894.8452173913045, 00:27:00.983 "max_latency_us": 11967.44347826087 00:27:00.983 } 00:27:00.983 ], 00:27:00.983 "core_count": 1 00:27:00.983 } 00:27:00.983 09:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:00.983 09:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:00.983 09:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:00.983 | select(.opcode=="crc32c") 00:27:00.983 | "\(.module_name) \(.executed)"' 00:27:00.983 09:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:00.983 09:29:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1259468 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1259468 ']' 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1259468 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1259468 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1259468' 00:27:01.241 killing process with pid 1259468 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1259468 00:27:01.241 Received shutdown signal, test time was about 2.000000 seconds 00:27:01.241 00:27:01.241 Latency(us) 00:27:01.241 [2024-11-19T08:29:02.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.241 [2024-11-19T08:29:02.300Z] =================================================================================================================== 00:27:01.241 [2024-11-19T08:29:02.300Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:01.241 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1259468 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1257807 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1257807 ']' 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1257807 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1257807 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1257807' 00:27:01.500 killing process with pid 1257807 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1257807 00:27:01.500 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1257807 00:27:01.759 00:27:01.759 real 0m13.855s 00:27:01.759 user 0m26.615s 00:27:01.759 sys 0m4.477s 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:01.759 ************************************ 00:27:01.759 END TEST nvmf_digest_clean 00:27:01.759 ************************************ 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:01.759 ************************************ 00:27:01.759 START TEST nvmf_digest_error 00:27:01.759 ************************************ 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1259982 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1259982 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1259982 ']' 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:01.759 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.760 [2024-11-19 09:29:02.699816] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:27:01.760 [2024-11-19 09:29:02.699860] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.760 [2024-11-19 09:29:02.777013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.018 [2024-11-19 09:29:02.818058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.018 [2024-11-19 09:29:02.818093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.018 [2024-11-19 09:29:02.818101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.018 [2024-11-19 09:29:02.818107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.018 [2024-11-19 09:29:02.818112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
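
As in the clean test, the target is launched inside that namespace with --wait-for-rpc so the accel layer can be reconfigured before any subsystem comes up; a short sketch of the launch, with paths as in this workspace:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!          # the pid that waitforlisten polls for the RPC socket
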
00:27:02.018 [2024-11-19 09:29:02.818684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.018 [2024-11-19 09:29:02.891130] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.018 09:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.018 null0 00:27:02.018 [2024-11-19 09:29:02.981783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.018 [2024-11-19 09:29:03.005997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1260182 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1260182 /var/tmp/bperf.sock 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1260182 ']' 
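
What makes this the error variant: crc32c is routed through the accel "error" module, and each case then arms or disarms corruption on that opcode. Both RPCs appear verbatim in the trace below, issued against the target's RPC socket; a sketch under those assumptions:

./scripts/rpc.py accel_assign_opc -o crc32c -m error                    # route crc32c to the error module
./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 ops
./scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # back to pass-through
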
00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:02.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:02.018 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.018 [2024-11-19 09:29:03.062646] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... [2024-11-19 09:29:03.062690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260182 ]
00:27:02.277 [2024-11-19 09:29:03.137384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:02.277 [2024-11-19 09:29:03.180304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:02.277 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:02.277 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:27:02.277 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:02.277 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:02.535 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:02.535 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:02.536 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:02.536 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:02.536 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:02.536 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:02.794 nvme0n1
00:27:02.794 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:02.794 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:02.794 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
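With bdevperf listening on bperf.sock, the trace above wires the host to the target and arms the corruption. A sketch of the same RPC sequence under the assumptions of the previous snippet (socket paths, address, and NQN are taken verbatim from the log; the BPERF shorthand is an added convenience):

    # RPCs on bperf.sock configure the bdevperf host; plain rpc.py talks to
    # the nvmf target over its default socket, as rpc_cmd does above.
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-command error statistics and retry failed I/O indefinitely.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach to the target with data digest enabled (--ddgst), so every READ
    # payload carries a crc32c that the host verifies on receive.
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side: corrupt crc32c results (arguments issued exactly as in
    # the test script, host/digest.sh).
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256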
00:27:02.794 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:02.794 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:02.794 09:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:03.053 Running I/O for 2 seconds...
00:27:03.053 [2024-11-19 09:29:03.897568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370)
00:27:03.053 [2024-11-19 09:29:03.897604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.053 [2024-11-19 09:29:03.897614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.053 [2024-11-19 09:29:03.910339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370)
00:27:03.053 [2024-11-19 09:29:03.910366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.053 [2024-11-19 09:29:03.910376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:03.053 [2024-11-19 09:29:03.921534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370)
00:27:03.053 [2024-11-19 09:29:03.921555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:03.053 [2024-11-19 09:29:03.921564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats for several dozen more READ commands between elapsed 00:27:03.053 and 00:27:04.094, every failure on tqpair=(0x803370) and differing only in cid and lba ...]
00:27:04.094 25028.00 IOPS, 97.77 MiB/s [2024-11-19T08:29:05.153Z]
[... the injected digest errors continue in the same pattern after bdevperf's periodic throughput report ...]
00:27:04.095 [2024-11-19 09:29:05.065045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370)
00:27:04.095 [2024-11-19 09:29:05.065066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:04.095 [2024-11-19 09:29:05.065074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:04.095 [2024-11-19 09:29:05.076881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370)
[2024-11-19 09:29:05.076904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.095 [2024-11-19 09:29:05.076915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.095 [2024-11-19 09:29:05.086330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.095 [2024-11-19 09:29:05.086352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.095 [2024-11-19 09:29:05.086361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.095 [2024-11-19 09:29:05.098065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.095 [2024-11-19 09:29:05.098086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.095 [2024-11-19 09:29:05.098095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.095 [2024-11-19 09:29:05.106791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.095 [2024-11-19 09:29:05.106811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.095 [2024-11-19 09:29:05.106820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.095 [2024-11-19 09:29:05.117532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.095 [2024-11-19 09:29:05.117553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.095 [2024-11-19 09:29:05.117561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.095 [2024-11-19 09:29:05.127745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.095 [2024-11-19 09:29:05.127765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.095 [2024-11-19 09:29:05.127774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.095 [2024-11-19 09:29:05.136472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.095 [2024-11-19 09:29:05.136493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.095 [2024-11-19 09:29:05.136501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.149072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.149097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.149107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.161932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.161962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.161972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.174536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.174559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.174568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.185938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.185966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.185975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.194477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.194499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.194507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.204342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.204363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.204372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.213309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.213329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.213337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.222908] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.222928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.222937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.232788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.232810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.232818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.243941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.243966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.243974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.256962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.256984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.256996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.267713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.267734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.267743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.279713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.279734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.279743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.289535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.289556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.289564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:04.354 [2024-11-19 09:29:05.299161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.299182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.299191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.307111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.307132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.307141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.318367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.354 [2024-11-19 09:29:05.318398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.354 [2024-11-19 09:29:05.318407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.354 [2024-11-19 09:29:05.327095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.355 [2024-11-19 09:29:05.327117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.355 [2024-11-19 09:29:05.327125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.355 [2024-11-19 09:29:05.337946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.355 [2024-11-19 09:29:05.337974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.355 [2024-11-19 09:29:05.337983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.355 [2024-11-19 09:29:05.346563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.355 [2024-11-19 09:29:05.346588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.355 [2024-11-19 09:29:05.346596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.355 [2024-11-19 09:29:05.358048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.355 [2024-11-19 09:29:05.358070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.355 [2024-11-19 09:29:05.358078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.355 [2024-11-19 09:29:05.370806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.355 [2024-11-19 09:29:05.370827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.355 [2024-11-19 09:29:05.370835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.355 [2024-11-19 09:29:05.383776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.355 [2024-11-19 09:29:05.383797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.355 [2024-11-19 09:29:05.383806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.355 [2024-11-19 09:29:05.396345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.355 [2024-11-19 09:29:05.396366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.355 [2024-11-19 09:29:05.396374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.355 [2024-11-19 09:29:05.407799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.355 [2024-11-19 09:29:05.407823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.355 [2024-11-19 09:29:05.407832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.416923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.416955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.416965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.429845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.429867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.429876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.441005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.441025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.441033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.449698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.449720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.449728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.462633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.462655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.462663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.474087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.474108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.474117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.482701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.482722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.482730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.493686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.493706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.493715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.504136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.504158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.504166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.512648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.512669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.512679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.522530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.522550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.522558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.533617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.533639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.533651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.542299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.542322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.542331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.552653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.552675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.552683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.564883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.564905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.564914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.574010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.574032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.574041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.584952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.584973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:04.614 [2024-11-19 09:29:05.584982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.598210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.598232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.598241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.609303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.609324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.609332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.618054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.618075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.618083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.629173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.629201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.629210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.640350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.640372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.640380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.649253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.649275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.649283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.614 [2024-11-19 09:29:05.660535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.614 [2024-11-19 09:29:05.660556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6089 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.614 [2024-11-19 09:29:05.660564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 09:29:05.672226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.873 [2024-11-19 09:29:05.672251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.873 [2024-11-19 09:29:05.672261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.873 [2024-11-19 09:29:05.682312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.873 [2024-11-19 09:29:05.682335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.682344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.690994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.691016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.691024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.702414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.702435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.702444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.714114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.714137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.714145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.725507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.725530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.725538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.733186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.733207] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.733215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.743693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.743715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.743723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.754061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.754083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.754091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.762827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.762848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.762856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.773526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.773547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.773555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.784523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.784545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.784553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.793118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.793139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.793148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.803244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.803265] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.803276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.814447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.814468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.814476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.823945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.823971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.823979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.832332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.832354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.832362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.842221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.842243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.842251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.851560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.851582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.851590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.862009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.862030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.862038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.873792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.873813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.873821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.874 [2024-11-19 09:29:05.882317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x803370) 00:27:04.874 [2024-11-19 09:29:05.882338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.874 [2024-11-19 09:29:05.882346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.132 24748.50 IOPS, 96.67 MiB/s 00:27:05.132 Latency(us) 00:27:05.132 [2024-11-19T08:29:06.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.132 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:05.132 nvme0n1 : 2.04 24270.13 94.81 0.00 0.00 5165.30 2578.70 49921.34 00:27:05.132 [2024-11-19T08:29:06.191Z] =================================================================================================================== 00:27:05.132 [2024-11-19T08:29:06.191Z] Total : 24270.13 94.81 0.00 0.00 5165.30 2578.70 49921.34 00:27:05.132 { 00:27:05.132 "results": [ 00:27:05.132 { 00:27:05.132 "job": "nvme0n1", 00:27:05.132 "core_mask": "0x2", 00:27:05.132 "workload": "randread", 00:27:05.132 "status": "finished", 00:27:05.132 "queue_depth": 128, 00:27:05.132 "io_size": 4096, 00:27:05.132 "runtime": 2.043994, 00:27:05.132 "iops": 24270.129951457784, 00:27:05.132 "mibps": 94.80519512288197, 00:27:05.132 "io_failed": 0, 00:27:05.132 "io_timeout": 0, 00:27:05.132 "avg_latency_us": 5165.302323468164, 00:27:05.132 "min_latency_us": 2578.6991304347825, 00:27:05.132 "max_latency_us": 49921.33565217391 00:27:05.132 } 00:27:05.132 ], 00:27:05.132 "core_count": 1 00:27:05.132 } 00:27:05.132 09:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:05.132 09:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:05.132 09:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:05.132 | .driver_specific 00:27:05.132 | .nvme_error 00:27:05.132 | .status_code 00:27:05.132 | .command_transient_transport_error' 00:27:05.132 09:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1260182 00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1260182 ']' 00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1260182 00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
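The pass condition for this run sits in the xtrace just above: get_transient_errcount pulls the per-bdev NVMe error counters out of bdevperf over its RPC socket, and the run passes because 194 transient transport errors were recorded. A minimal standalone sketch of that step in bash, assuming the same workspace path and socket as this job (SPDK_DIR is a convenience variable introduced here, not part of the suite):

    # Query iostat for nvme0n1; the nvme_error block is populated because the
    # suite starts bdev_nvme with --nvme-error-stat (visible in the next run's
    # trace below). Extract the transient transport error counter with jq.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # Injected digest corruption must have surfaced as at least one such error.
    (( errcount > 0 )) || echo "no transient transport errors recorded" >&2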
00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1260182
00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1260182 ']'
00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1260182
00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:27:05.132 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1260182
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1260182'
00:27:05.390 killing process with pid 1260182
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1260182
00:27:05.390 Received shutdown signal, test time was about 2.000000 seconds
00:27:05.390
00:27:05.390 Latency(us)
00:27:05.390 [2024-11-19T08:29:06.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:05.390 [2024-11-19T08:29:06.449Z] ===================================================================================================================
00:27:05.390 [2024-11-19T08:29:06.449Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1260182
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1260693
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1260693 /var/tmp/bperf.sock
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1260693 ']'
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:05.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:05.390 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
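This is the standard bperf launch pattern in host/digest.sh: bdevperf is started idle with -z against a private RPC socket, and waitforlisten polls (up to max_retries=100) until the socket answers before any bperf_rpc call is made. A minimal sketch of the same sequence, with the polling loop written out as a stand-in for the suite's waitforlisten helper (SPDK_DIR and the rpc_get_methods probe are illustrative choices, not copied from autotest_common.sh):

    # Start bdevperf idle (-z) on its own RPC socket; the workload flags mirror
    # run_bperf_err randread 131072 16 from the trace above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the app services RPCs; only then is it safe to configure it.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done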
00:27:05.390 [2024-11-19 09:29:06.412941] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:27:05.390 [2024-11-19 09:29:06.412998] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260693 ]
00:27:05.390 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:05.390 Zero copy mechanism will not be used.
00:27:05.648 [2024-11-19 09:29:06.486207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:05.648 [2024-11-19 09:29:06.524347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:05.648 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:05.648 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:27:05.648 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:05.648 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:05.906 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:05.906 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:05.906 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:05.906 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:05.906 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:05.906 09:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:06.164 nvme0n1
00:27:06.164 09:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:06.164 09:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:06.164 09:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:06.164 09:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:06.164 09:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:06.164 09:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:06.423 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:06.423 Zero copy mechanism will not be used.
00:27:06.423 Running I/O for 2 seconds...
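That block is the arming sequence for this error run: any previous crc32c error injection is disabled, the controller is attached with TCP data digest enabled (--ddgst), crc32c corruption is injected at an interval of 32 operations, and the preconfigured randread workload is started through bdevperf's RPC script. A condensed sketch of those RPCs in bash, assuming this job's workspace path; note the socket split visible in the trace, where rpc_cmd goes to the target application's default RPC socket while bperf_rpc goes to /var/tmp/bperf.sock:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt every 32nd crc32c result so the host sees intermittent data
    # digest mismatches (default-socket call, mirroring rpc_cmd in the trace).
    "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the already-configured workload inside the idle (-z) bdevperf app.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests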
00:27:06.423 [2024-11-19 09:29:07.235920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570)
00:27:06.423 [2024-11-19 09:29:07.235964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.423 [2024-11-19 09:29:07.235977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:06.423 [2024-11-19 09:29:07.241213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570)
00:27:06.423 [2024-11-19 09:29:07.241243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.423 [2024-11-19 09:29:07.241252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[18 further data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) READ completions on tqpair=(0xd91570), all cid:15 len:32, sqhd cycling 0001/0021/0041/0061, elided]
00:27:06.424 [2024-11-19 09:29:07.340507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570)
00:27:06.424 [2024-11-19 09:29:07.340528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.424 [2024-11-19 09:29:07.340536]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.345827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.345848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.345857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.351100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.351122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.351131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.356273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.356295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.356303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.361432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.361455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.361463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.366777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.366799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.366807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.372083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.372106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.372114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.377363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.377385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:06.424 [2024-11-19 09:29:07.377393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.382545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.382566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.382574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.387818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.387840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.387848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.393065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.393087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.393095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.398307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.398328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.398336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.403586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.403607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.403615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.408877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.408898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.408910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.414150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.414170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.414179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.419463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.419485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.419493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.424752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.424773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.424781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.429985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.430006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.430014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.435188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.435209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.435217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.440450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.440472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.440480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.445659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.445680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.445688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.450910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.450931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.450940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.456149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.424 [2024-11-19 09:29:07.456170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.424 [2024-11-19 09:29:07.456177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.424 [2024-11-19 09:29:07.461447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.425 [2024-11-19 09:29:07.461474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.425 [2024-11-19 09:29:07.461482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.425 [2024-11-19 09:29:07.466957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.425 [2024-11-19 09:29:07.466979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.425 [2024-11-19 09:29:07.466987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.425 [2024-11-19 09:29:07.472212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.425 [2024-11-19 09:29:07.472233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.425 [2024-11-19 09:29:07.472242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.683 [2024-11-19 09:29:07.477703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.683 [2024-11-19 09:29:07.477727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.683 [2024-11-19 09:29:07.477738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.683 [2024-11-19 09:29:07.483055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.483079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.483088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.488521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 
00:27:06.684 [2024-11-19 09:29:07.488543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.488552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.493960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.493983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.493992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.499230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.499251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.499263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.504657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.504679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.504688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.510071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.510094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.510103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.515504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.515526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.515534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.520860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.520882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.520890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.526171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.526203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.526212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.531528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.531551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.531560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.537010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.537032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.537041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.542404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.542425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.542433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.547747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.547771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.547779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.554131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.554153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.554161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.559655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.559676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.559685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.565953] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.565975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.565984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.573671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.573692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.573701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.580097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.580118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.580127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.587188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.587210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.587219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.593212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.593234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.593243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.598700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.598722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.598730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.684 [2024-11-19 09:29:07.603837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.603860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.603869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:27:06.684 [2024-11-19 09:29:07.609993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.684 [2024-11-19 09:29:07.610016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.684 [2024-11-19 09:29:07.610026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.616267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.616290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.616298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.624212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.624236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.624245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.631132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.631155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.631164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.637695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.637718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.637727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.644195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.644218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.644227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.652266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.652289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.652297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.659842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.659865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.659878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.666958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.666996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.667005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.673600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.673623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.673632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.681061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.681083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.681091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.688451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.688474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.688483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.697109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.697132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.697141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.705434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.705457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.705466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.713687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.713711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.713720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.721296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.721319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.721328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.728082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.728109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.728117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.685 [2024-11-19 09:29:07.735404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.685 [2024-11-19 09:29:07.735429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.685 [2024-11-19 09:29:07.735439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.944 [2024-11-19 09:29:07.742602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.944 [2024-11-19 09:29:07.742629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.944 [2024-11-19 09:29:07.742639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.944 [2024-11-19 09:29:07.749211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.944 [2024-11-19 09:29:07.749236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.944 [2024-11-19 09:29:07.749245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.944 [2024-11-19 09:29:07.755680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.944 [2024-11-19 09:29:07.755704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.944 [2024-11-19 09:29:07.755713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.944 [2024-11-19 09:29:07.762341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.944 [2024-11-19 09:29:07.762365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.944 [2024-11-19 09:29:07.762373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.944 [2024-11-19 09:29:07.770224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.944 [2024-11-19 09:29:07.770247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.770256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.777929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.777960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.777970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.784744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.784767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.784776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.790895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.790919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.790927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.796460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.796483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.796492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.802363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.802385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 
[2024-11-19 09:29:07.802394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.808486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.808510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.808519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.814601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.814624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.814632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.821754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.821777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.821786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.828897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.828920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.828928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.835161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.835184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.835193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.841430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.841453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.841465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.847852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.847875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.847883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.855419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.855441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.855449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.861866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.861889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.861897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.868266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.868289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.868297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.873700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.873722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.873731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.879034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.879057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.879065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.884437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.884459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.884467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.889757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.889779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.889788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.895238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.895264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.895272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.900651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.900673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.900681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.905992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.906014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.906022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.911445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.911467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.911475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.916995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.917017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.917026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.922599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.922620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.945 [2024-11-19 09:29:07.922629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:06.945 [2024-11-19 09:29:07.928192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:06.945 [2024-11-19 09:29:07.928215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.946 [2024-11-19 09:29:07.928224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:06.946 [2024-11-19 09:29:07.933696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570)
00:27:06.946 [2024-11-19 09:29:07.933718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.946 [2024-11-19 09:29:07.933727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... repeated records elided, 09:29:07.939 through 09:29:08.228: each is a data digest error on tqpair=(0xd91570) from nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done, followed by the failed READ command print (sqid:1, len:32, varying cid and lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:27:07.207 5446.00 IOPS, 680.75 MiB/s [2024-11-19T08:29:08.266Z]
[... repeated records elided, 09:29:08.234 through 09:29:08.747: same data digest error / READ command / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pattern on tqpair=(0xd91570), varying cid and lba ...]
00:27:07.731 [2024-11-19 09:29:08.752974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570)
00:27:07.731 [2024-11-19 09:29:08.752997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:07.731 [2024-11-19 09:29:08.753005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:07.731 [2024-11-19 09:29:08.758271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570)
00:27:07.731 [2024-11-19 09:29:08.758293] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.731 [2024-11-19 09:29:08.758301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.731 [2024-11-19 09:29:08.763912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.731 [2024-11-19 09:29:08.763935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.731 [2024-11-19 09:29:08.763944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.731 [2024-11-19 09:29:08.769302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.731 [2024-11-19 09:29:08.769324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.731 [2024-11-19 09:29:08.769333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.731 [2024-11-19 09:29:08.774699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.731 [2024-11-19 09:29:08.774721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.731 [2024-11-19 09:29:08.774729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.731 [2024-11-19 09:29:08.780029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.731 [2024-11-19 09:29:08.780053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.731 [2024-11-19 09:29:08.780067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.785389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.785414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.785423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.790840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.790866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.790875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.796079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 
[2024-11-19 09:29:08.796102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.796111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.801369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.801393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.801401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.806581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.806605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.806613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.811866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.811888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.811897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.817142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.817164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.817173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.822374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.822396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.822404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.827668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.827694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.827703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.833002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.833025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.833033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.838575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.838598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.838606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.845269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.845292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.845301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.852358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.852381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.852390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.860688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.860711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.860720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.867571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.867595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.867603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.874022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.874044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.874053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.879434] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.879456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.879464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.884416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.884439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.884447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.889610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.889632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.889640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.895700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.895723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.895731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.902999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.903022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.903030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.909757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.909780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.909788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.916103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.916126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.916134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:07.991 [2024-11-19 09:29:08.922362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.922383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.922392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.991 [2024-11-19 09:29:08.928543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.991 [2024-11-19 09:29:08.928567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.991 [2024-11-19 09:29:08.928575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.934315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.934337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.934349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.941033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.941056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.941065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.948163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.948186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.948195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.954323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.954346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.954354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.960422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.960444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.960453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.966617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.966640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.966648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.972631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.972653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.972661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.975934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.975961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.975970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.982291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.982313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.982322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.988051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.988072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.988080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.993178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.993201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.993209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:08.998513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:08.998535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:08.998544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:09.005166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:09.005187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:09.005196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:09.012798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:09.012820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:09.012829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:09.020370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:09.020394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:09.020403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:09.027672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:09.027695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:09.027704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:09.033292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:09.033314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:09.033322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:07.992 [2024-11-19 09:29:09.038681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:07.992 [2024-11-19 09:29:09.038703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:07.992 [2024-11-19 09:29:09.038715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.258 [2024-11-19 09:29:09.044323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.258 [2024-11-19 09:29:09.044363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.258 [2024-11-19 09:29:09.044378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.258 [2024-11-19 09:29:09.050213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.258 [2024-11-19 09:29:09.050238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.258 [2024-11-19 09:29:09.050249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.258 [2024-11-19 09:29:09.057641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.259 [2024-11-19 09:29:09.057665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.259 [2024-11-19 09:29:09.057674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.259 [2024-11-19 09:29:09.065525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.259 [2024-11-19 09:29:09.065549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.259 [2024-11-19 09:29:09.065558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.259 [2024-11-19 09:29:09.072623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.259 [2024-11-19 09:29:09.072646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.259 [2024-11-19 09:29:09.072655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.259 [2024-11-19 09:29:09.078655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.259 [2024-11-19 09:29:09.078677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.259 [2024-11-19 09:29:09.078686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.259 [2024-11-19 09:29:09.084752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.259 [2024-11-19 09:29:09.084775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.259 [2024-11-19 09:29:09.084784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.259 [2024-11-19 09:29:09.090105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.259 [2024-11-19 09:29:09.090127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.259 
[2024-11-19 09:29:09.090136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.259 [2024-11-19 09:29:09.095358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.259 [2024-11-19 09:29:09.095385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.259 [2024-11-19 09:29:09.095392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.259 [2024-11-19 09:29:09.100634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.259 [2024-11-19 09:29:09.100656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.259 [2024-11-19 09:29:09.100664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.259 [2024-11-19 09:29:09.106188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.260 [2024-11-19 09:29:09.106210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.260 [2024-11-19 09:29:09.106218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.260 [2024-11-19 09:29:09.112335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.260 [2024-11-19 09:29:09.112357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.260 [2024-11-19 09:29:09.112365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.260 [2024-11-19 09:29:09.118588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.260 [2024-11-19 09:29:09.118611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.260 [2024-11-19 09:29:09.118620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.260 [2024-11-19 09:29:09.124910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.260 [2024-11-19 09:29:09.124934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.260 [2024-11-19 09:29:09.124943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.260 [2024-11-19 09:29:09.131409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.260 [2024-11-19 09:29:09.131431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:08.260 [2024-11-19 09:29:09.131439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.260 [2024-11-19 09:29:09.137745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.260 [2024-11-19 09:29:09.137767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.260 [2024-11-19 09:29:09.137775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.260 [2024-11-19 09:29:09.143817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.260 [2024-11-19 09:29:09.143839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.261 [2024-11-19 09:29:09.143847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.261 [2024-11-19 09:29:09.150127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.261 [2024-11-19 09:29:09.150149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.261 [2024-11-19 09:29:09.150158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.261 [2024-11-19 09:29:09.156373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.261 [2024-11-19 09:29:09.156394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.261 [2024-11-19 09:29:09.156403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.261 [2024-11-19 09:29:09.162562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.261 [2024-11-19 09:29:09.162585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.261 [2024-11-19 09:29:09.162593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.261 [2024-11-19 09:29:09.169140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.261 [2024-11-19 09:29:09.169163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.261 [2024-11-19 09:29:09.169172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.261 [2024-11-19 09:29:09.175093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.261 [2024-11-19 09:29:09.175114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.261 [2024-11-19 09:29:09.175123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.261 [2024-11-19 09:29:09.180385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.261 [2024-11-19 09:29:09.180406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.261 [2024-11-19 09:29:09.180415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.261 [2024-11-19 09:29:09.185627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.261 [2024-11-19 09:29:09.185649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.261 [2024-11-19 09:29:09.185657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.261 [2024-11-19 09:29:09.190872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.261 [2024-11-19 09:29:09.190892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.262 [2024-11-19 09:29:09.190900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.262 [2024-11-19 09:29:09.195672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.262 [2024-11-19 09:29:09.195694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.262 [2024-11-19 09:29:09.195707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.262 [2024-11-19 09:29:09.201037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.262 [2024-11-19 09:29:09.201059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.262 [2024-11-19 09:29:09.201067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.262 [2024-11-19 09:29:09.206219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.262 [2024-11-19 09:29:09.206241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.262 [2024-11-19 09:29:09.206250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.262 [2024-11-19 09:29:09.211491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd91570) 00:27:08.262 [2024-11-19 09:29:09.211513] nvme_qpair.c: 
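Each failed READ above produces the same three console lines: nvme_tcp.c reports a CRC-32C data digest mismatch on the receive path, nvme_qpair.c prints the affected READ, and the completion is printed with status (00/22), i.e. status code type 0x0 / status code 0x22 (Command Transient Transport Error), with dnr:0 so the host is free to retry. To tally these completions from a saved copy of this console output, a one-liner such as the following works (a sketch; the file name bdevperf-console.log is hypothetical):

    grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]* cid:[0-9]*' bdevperf-console.log \
        | sort | uniq -c | sort -rn | head   # most frequently hit qid/cid pairs first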
00:27:08.263 5321.00 IOPS, 665.12 MiB/s
00:27:08.263                                                              Latency(us)
[2024-11-19T08:29:09.322Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:08.263 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:08.263    nvme0n1                  :       2.00    5323.79     665.47       0.00     0.00    3002.67     648.24    8548.17
[2024-11-19T08:29:09.322Z] ===================================================================================================================
[2024-11-19T08:29:09.322Z] Total                       :               5323.79     665.47       0.00     0.00    3002.67     648.24    8548.17
00:27:08.263 {
00:27:08.263   "results": [
00:27:08.263     {
00:27:08.263       "job": "nvme0n1",
00:27:08.263       "core_mask": "0x2",
00:27:08.263       "workload": "randread",
00:27:08.263       "status": "finished",
00:27:08.263       "queue_depth": 16,
00:27:08.263       "io_size": 131072,
00:27:08.263       "runtime": 2.001958,
00:27:08.263       "iops": 5323.788011536705,
00:27:08.263       "mibps": 665.4735014420881,
00:27:08.263       "io_failed": 0,
00:27:08.264       "io_timeout": 0,
00:27:08.264       "avg_latency_us": 3002.665557939739,
00:27:08.264       "min_latency_us": 648.2365217391305,
00:27:08.264       "max_latency_us": 8548.173913043478
00:27:08.264     }
00:27:08.264   ],
00:27:08.264   "core_count": 1
00:27:08.264 }
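The summary table and the JSON are mutually consistent: with io_size 131072 bytes (1/8 MiB), throughput in MiB/s is IOPS divided by 8, and IOPS times runtime gives the total I/O count. A quick check of the figures above (a sketch, arithmetic only):

    awk 'BEGIN {
        iops = 5323.788011536705; runtime = 2.001958; io_size = 131072
        printf "MiB/s      = %.4f\n", iops * io_size / 1048576   # 665.4735, matches "mibps"
        printf "total I/Os = %.0f\n", iops * runtime             # ~10658 completed in ~2 s
    }'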
00:27:08.264 09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:08.264 09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:08.264 09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:27:08.264 09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:08.524 09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 343 > 0 ))
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1260693
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1260693 ']'
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1260693
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1260693
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1260693'
killing process with pid 1260693
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1260693
Received shutdown signal, test time was about 2.000000 seconds
00:27:08.524
00:27:08.524                                                              Latency(us)
[2024-11-19T08:29:09.583Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-11-19T08:29:09.583Z] ===================================================================================================================
[2024-11-19T08:29:09.583Z] Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:27:08.524 09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1260693
00:27:08.783 09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1261164
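The trace above shows how get_transient_errcount works: it queries bdevperf's RPC server for per-bdev I/O statistics (NVMe error counters were enabled earlier with --nvme-error-stat) and extracts the count of (00/22) completions with jq; here the filter returned 343, so the (( 343 > 0 )) assertion passes because digest corruption was indeed being injected. A standalone reconstruction of the helper, as far as the trace shows it (a sketch, not the verbatim script source):

    get_transient_errcount() {   # $1 = bdev name, e.g. nvme0n1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # the test's pass condition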
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1261164 /var/tmp/bperf.sock
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1261164 ']'
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:08.783 [2024-11-19 09:29:09.713238] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:27:08.783 [2024-11-19 09:29:09.713287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261164 ]
00:27:08.783 [2024-11-19 09:29:09.787133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:08.783 [2024-11-19 09:29:09.825049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:09.041 09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
09:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:09.299 09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
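This is the usual bperf harness setup: bdevperf is launched with -z so it idles until told to start, on a private RPC socket (-r /var/tmp/bperf.sock); the script waits for that socket, then configures bdev_nvme to keep per-status-code NVMe error counters and to retry failed commands indefinitely (--bdev-retry-count -1), so the injected digest errors are retried rather than failing the job. Condensed into one place (a sketch; the until-loop is a simplified stand-in for the waitforlisten helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # poll until the RPC socket answers
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null; do sleep 0.1; done

    # record per-status-code NVMe error counts; retry transient errors forever
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1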
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:09.557 nvme0n1
00:27:09.557 09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
09:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:09.557 Running I/O for 2 seconds...
00:27:09.557 [2024-11-19 09:29:10.535976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ee5c8
00:27:09.557 [2024-11-19 09:29:10.536790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.557 [2024-11-19 09:29:10.536826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:09.557 [2024-11-19 09:29:10.548070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f9f68
00:27:09.557 [2024-11-19 09:29:10.549638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.557 [2024-11-19 09:29:10.549664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:09.557 [2024-11-19 09:29:10.554999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fb8b8
00:27:09.557 [2024-11-19 09:29:10.555797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.557 [2024-11-19 09:29:10.555816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:09.557 [2024-11-19 09:29:10.564959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ebfd0
00:27:09.557 [2024-11-19 09:29:10.565897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.557 [2024-11-19 09:29:10.565917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:09.557 [2024-11-19 09:29:10.574682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8a50
00:27:09.557 [2024-11-19 09:29:10.575159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.557 [2024-11-19 09:29:10.575180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
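The randwrite leg repeats the experiment on the write path: the controller is attached with TCP data digest enabled (--ddgst), accel is told to corrupt every 256th crc32c operation, and perform_tests signals the idling bdevperf to start. Each corrupted digest then fails verification at the receiving end of the connection (tcp.c: data_crc32_calc_done), and the WRITE completes with the same (00/22) transient transport error seen in the surrounding entries. Gathered from the trace into one sequence (a sketch, reusing the $SPDK and $SOCK names assumed in the previous sketch):

    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t disable   # start from a clean slate
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests   # run the 2-second job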
09:29:10.584615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e23b8 00:27:09.558 [2024-11-19 09:29:10.585205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.558 [2024-11-19 09:29:10.585225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:09.558 [2024-11-19 09:29:10.594539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f4b08 00:27:09.558 [2024-11-19 09:29:10.595256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.558 [2024-11-19 09:29:10.595276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:09.558 [2024-11-19 09:29:10.603468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e7818 00:27:09.558 [2024-11-19 09:29:10.604752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.558 [2024-11-19 09:29:10.604771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:09.817 [2024-11-19 09:29:10.613212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ed4e8 00:27:09.817 [2024-11-19 09:29:10.614179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.817 [2024-11-19 09:29:10.614204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:09.817 [2024-11-19 09:29:10.624519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eb760 00:27:09.817 [2024-11-19 09:29:10.626073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.817 [2024-11-19 09:29:10.626097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:09.817 [2024-11-19 09:29:10.631207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e84c0 00:27:09.817 [2024-11-19 09:29:10.631880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.817 [2024-11-19 09:29:10.631899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:09.817 [2024-11-19 09:29:10.641113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e3d08 00:27:09.817 [2024-11-19 09:29:10.641932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.817 [2024-11-19 09:29:10.641956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
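[Editor's note: the setup traced above reduces to a short, reproducible RPC sequence. The sketch below restates it as a standalone script; every command, flag, address, and path is taken verbatim from the trace, while the backgrounding, the SPDK/RPC shell variables, and the sleep standing in for the harness's waitforlisten poll are illustrative assumptions.]

    # Hedged reconstruction of the nvmf_digest_error setup shown in this trace.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Launch bdevperf idle (-z, wait for RPC): 4 KiB random writes, QD 128, 2 s runs.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    sleep 1   # stand-in for waitforlisten polling /var/tmp/bperf.sock

    # Retry failed I/O indefinitely and keep per-controller NVMe error counters.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start with crc32c injection disabled, then attach with data digest (--ddgst) on.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt crc32c results (interval 256, as in the trace) and kick off the workload.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests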
00:27:09.817 [2024-11-19 09:29:10.651608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f1ca0 00:27:09.817 [2024-11-19 09:29:10.652791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.817 [2024-11-19 09:29:10.652811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.660602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f6458 00:27:09.818 [2024-11-19 09:29:10.661674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.661694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.669878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e9e10 00:27:09.818 [2024-11-19 09:29:10.670923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.670942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.679788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f1ca0 00:27:09.818 [2024-11-19 09:29:10.680959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.680979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.689712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ea248 00:27:09.818 [2024-11-19 09:29:10.691005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.691025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.699620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eb328 00:27:09.818 [2024-11-19 09:29:10.701040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.701059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.708066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e23b8 00:27:09.818 [2024-11-19 09:29:10.708900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.708921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 
cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.717809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f3a28 00:27:09.818 [2024-11-19 09:29:10.718869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.718889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.728594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e3d08 00:27:09.818 [2024-11-19 09:29:10.730144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.730163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.735516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e4578 00:27:09.818 [2024-11-19 09:29:10.736303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.736323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.745406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fb048 00:27:09.818 [2024-11-19 09:29:10.746324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.746343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.755314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8a50 00:27:09.818 [2024-11-19 09:29:10.756358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.756377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.765211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e0630 00:27:09.818 [2024-11-19 09:29:10.766374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.766393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.775038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e4578 00:27:09.818 [2024-11-19 09:29:10.776348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.776383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.783808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e0ea0 00:27:09.818 [2024-11-19 09:29:10.785086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.785110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.791936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8a50 00:27:09.818 [2024-11-19 09:29:10.792609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.792628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.801830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166dece0 00:27:09.818 [2024-11-19 09:29:10.802626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.802645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.811742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e9168 00:27:09.818 [2024-11-19 09:29:10.812681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.812700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.821618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fa3a0 00:27:09.818 [2024-11-19 09:29:10.822685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.822705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.833368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e0630 00:27:09.818 [2024-11-19 09:29:10.834902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.834921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.840099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ec408 00:27:09.818 [2024-11-19 09:29:10.840898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.840917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.849923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f0ff8 00:27:09.818 [2024-11-19 09:29:10.850903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.850922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.861778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e95a0 00:27:09.818 [2024-11-19 09:29:10.863287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.818 [2024-11-19 09:29:10.863306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:09.818 [2024-11-19 09:29:10.868612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eaab8 00:27:09.819 [2024-11-19 09:29:10.869356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.819 [2024-11-19 09:29:10.869379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:10.077 [2024-11-19 09:29:10.880358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e49b0 00:27:10.078 [2024-11-19 09:29:10.881597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.881620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.890259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f3e60 00:27:10.078 [2024-11-19 09:29:10.891614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.891634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.900156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e5658 00:27:10.078 [2024-11-19 09:29:10.901635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.901654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.910066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fe2e8 00:27:10.078 [2024-11-19 09:29:10.911673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.911692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.916729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ec840 00:27:10.078 [2024-11-19 09:29:10.917483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.917504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.927105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e5220 00:27:10.078 [2024-11-19 09:29:10.928204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.928224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.935875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f5be8 00:27:10.078 [2024-11-19 09:29:10.936781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.936801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.945390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166dfdc0 00:27:10.078 [2024-11-19 09:29:10.946363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.946381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.955631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e73e0 00:27:10.078 [2024-11-19 09:29:10.956658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.956678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.965385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ebb98 00:27:10.078 [2024-11-19 09:29:10.966728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.966748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.974992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fe720 00:27:10.078 [2024-11-19 09:29:10.976479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 
09:29:10.976498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.983120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ee5c8 00:27:10.078 [2024-11-19 09:29:10.983899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.983918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:10.992440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f9f68 00:27:10.078 [2024-11-19 09:29:10.993559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:10.993578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.001155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ed0b0 00:27:10.078 [2024-11-19 09:29:11.002246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.002265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.010776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f1868 00:27:10.078 [2024-11-19 09:29:11.012000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.012019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.019362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e23b8 00:27:10.078 [2024-11-19 09:29:11.020151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.020170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.030869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eea00 00:27:10.078 [2024-11-19 09:29:11.032461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.032487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.037353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e38d0 00:27:10.078 [2024-11-19 09:29:11.038123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:10.078 [2024-11-19 09:29:11.038142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.047551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f2948 00:27:10.078 [2024-11-19 09:29:11.048705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.048724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.056571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e5ec8 00:27:10.078 [2024-11-19 09:29:11.057586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.057605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.066203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eaef0 00:27:10.078 [2024-11-19 09:29:11.067185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.067204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.075928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e9e10 00:27:10.078 [2024-11-19 09:29:11.077055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.077075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.085842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eaab8 00:27:10.078 [2024-11-19 09:29:11.087249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.087268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.095514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fc128 00:27:10.078 [2024-11-19 09:29:11.097006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.097025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.102125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fc128 00:27:10.078 [2024-11-19 09:29:11.102877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20987 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.102895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.111481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fb8b8 00:27:10.078 [2024-11-19 09:29:11.112258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.112277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:10.078 [2024-11-19 09:29:11.120986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e27f0 00:27:10.078 [2024-11-19 09:29:11.121435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.078 [2024-11-19 09:29:11.121454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:10.079 [2024-11-19 09:29:11.130751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166de038 00:27:10.079 [2024-11-19 09:29:11.131320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.079 [2024-11-19 09:29:11.131344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.140228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166de038 00:27:10.338 [2024-11-19 09:29:11.141050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.141073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.150720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166de038 00:27:10.338 [2024-11-19 09:29:11.152081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.152104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.160037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e7c50 00:27:10.338 [2024-11-19 09:29:11.161400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.161419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.167693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fcdd0 00:27:10.338 [2024-11-19 09:29:11.168263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:10459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.168283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.176598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ec840 00:27:10.338 [2024-11-19 09:29:11.177427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.177447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.185739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f6458 00:27:10.338 [2024-11-19 09:29:11.186516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.186535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.194491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eea00 00:27:10.338 [2024-11-19 09:29:11.195246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.195266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.204290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f9f68 00:27:10.338 [2024-11-19 09:29:11.205170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.205191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.213917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e6300 00:27:10.338 [2024-11-19 09:29:11.214931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.214955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.223155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e1b48 00:27:10.338 [2024-11-19 09:29:11.223833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.223855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.231595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fe2e8 00:27:10.338 [2024-11-19 09:29:11.232348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:110 nsid:1 lba:8269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.232368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.240956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ef270 00:27:10.338 [2024-11-19 09:29:11.241705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.241724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.250105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8618 00:27:10.338 [2024-11-19 09:29:11.250768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.250788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.259451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e8088 00:27:10.338 [2024-11-19 09:29:11.260119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.260138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.271649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e1f80 00:27:10.338 [2024-11-19 09:29:11.273172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.273196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.279616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ef270 00:27:10.338 [2024-11-19 09:29:11.280660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.280695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.289441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8a50 00:27:10.338 [2024-11-19 09:29:11.290745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.290765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.298656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e7c50 00:27:10.338 [2024-11-19 09:29:11.299729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.299749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.307988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e7c50 00:27:10.338 [2024-11-19 09:29:11.309058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.309078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.317351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e7c50 00:27:10.338 [2024-11-19 09:29:11.318404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.318423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.326704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e7c50 00:27:10.338 [2024-11-19 09:29:11.327736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.327756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.335731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e8088 00:27:10.338 [2024-11-19 09:29:11.336757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.336776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.345505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166edd58 00:27:10.338 [2024-11-19 09:29:11.346810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.338 [2024-11-19 09:29:11.346829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:10.338 [2024-11-19 09:29:11.353702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166de470 00:27:10.339 [2024-11-19 09:29:11.355127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.339 [2024-11-19 09:29:11.355149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.339 [2024-11-19 09:29:11.361774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fc998 00:27:10.339 [2024-11-19 
09:29:11.362442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.339 [2024-11-19 09:29:11.362462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:10.339 [2024-11-19 09:29:11.370921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ef270 00:27:10.339 [2024-11-19 09:29:11.371574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.339 [2024-11-19 09:29:11.371593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:10.339 [2024-11-19 09:29:11.380060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ed4e8 00:27:10.339 [2024-11-19 09:29:11.380697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.339 [2024-11-19 09:29:11.380716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:10.339 [2024-11-19 09:29:11.391149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ed4e8 00:27:10.598 [2024-11-19 09:29:11.392419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.392441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.400152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f7970 00:27:10.598 [2024-11-19 09:29:11.401131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.401154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.409410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f3a28 00:27:10.598 [2024-11-19 09:29:11.410292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.410312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.418033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166efae0 00:27:10.598 [2024-11-19 09:29:11.418740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.418759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.427001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e3d08 
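[Editor's note: the "(00/22)" in every completion above is the NVMe status printed as an SCT/SC pair: Status Code Type 0x0 (Generic Command Status) with Status Code 0x22, which NVMe defines as Transient Transport Error, a retryable transport-level failure rather than a media or command error, matching the name SPDK prints next to it. Two hedged checks against the same hypothetical bperf.log capture:]

    # Any completion status other than the retryable 00/22? (0 means none appeared.)
    grep 'spdk_nvme_print_completion' bperf.log | grep -vc 'TRANSIENT TRANSPORT ERROR (00/22)'
    # With --bdev-retry-count -1 the controller should still be attached afterwards;
    # nvme0 should appear in the listing:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_get_controllers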
00:27:10.598 [2024-11-19 09:29:11.427640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.427660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.436612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f2948 00:27:10.598 [2024-11-19 09:29:11.437445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.437466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.445972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8a50 00:27:10.598 [2024-11-19 09:29:11.446846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.446865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.455628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e9e10 00:27:10.598 [2024-11-19 09:29:11.456390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.456410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.465128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f3e60 00:27:10.598 [2024-11-19 09:29:11.465776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.465796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.476165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f7970 00:27:10.598 [2024-11-19 09:29:11.477715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.477735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.482752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e4578 00:27:10.598 [2024-11-19 09:29:11.483417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.483437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.493236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) 
with pdu=0x2000166e8d30 00:27:10.598 [2024-11-19 09:29:11.494031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.494050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.502198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166de470 00:27:10.598 [2024-11-19 09:29:11.503305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.503325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.511454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eaab8 00:27:10.598 [2024-11-19 09:29:11.512468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.512487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.520174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fb048 00:27:10.598 [2024-11-19 09:29:11.521207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.521227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:10.598 27143.00 IOPS, 106.03 MiB/s [2024-11-19T08:29:11.657Z] [2024-11-19 09:29:11.533061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8e88 00:27:10.598 [2024-11-19 09:29:11.534462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.534483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:10.598 [2024-11-19 09:29:11.542349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eaab8 00:27:10.598 [2024-11-19 09:29:11.543882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.598 [2024-11-19 09:29:11.543902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.549189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eaab8 00:27:10.599 [2024-11-19 09:29:11.549980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.549999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.560761] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166df118 00:27:10.599 [2024-11-19 09:29:11.561912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.561933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.568380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eb328 00:27:10.599 [2024-11-19 09:29:11.568929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.568953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.578858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eb328 00:27:10.599 [2024-11-19 09:29:11.579889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.579908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.588003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eb328 00:27:10.599 [2024-11-19 09:29:11.589027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.589046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.597139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f2510 00:27:10.599 [2024-11-19 09:29:11.598264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.598284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.605729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e5658 00:27:10.599 [2024-11-19 09:29:11.606485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.606504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.615086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fcdd0 00:27:10.599 [2024-11-19 09:29:11.615617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.615636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 
09:29:11.624608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eaab8 00:27:10.599 [2024-11-19 09:29:11.625388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.625408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.633750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ed4e8 00:27:10.599 [2024-11-19 09:29:11.634523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.634543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:10.599 [2024-11-19 09:29:11.642902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f2d80 00:27:10.599 [2024-11-19 09:29:11.643667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.599 [2024-11-19 09:29:11.643687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.653669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fa3a0 00:27:10.859 [2024-11-19 09:29:11.655016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.655041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.661476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e73e0 00:27:10.859 [2024-11-19 09:29:11.662344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.662367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.670989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f31b8 00:27:10.859 [2024-11-19 09:29:11.671945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.671969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.680689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e5a90 00:27:10.859 [2024-11-19 09:29:11.681705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.681725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:27:10.859 [2024-11-19 09:29:11.690362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e3498 00:27:10.859 [2024-11-19 09:29:11.691563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.691583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.697869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fc998 00:27:10.859 [2024-11-19 09:29:11.698612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.698631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.707087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166de8a8 00:27:10.859 [2024-11-19 09:29:11.707835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.707854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.716450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8e88 00:27:10.859 [2024-11-19 09:29:11.717201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.717221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.725720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fb480 00:27:10.859 [2024-11-19 09:29:11.726649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.726668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.735069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166df118 00:27:10.859 [2024-11-19 09:29:11.735809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.735829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.744280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e4578 00:27:10.859 [2024-11-19 09:29:11.745007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.745026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 
cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.753521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ec408 00:27:10.859 [2024-11-19 09:29:11.754250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.754269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.762723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e73e0 00:27:10.859 [2024-11-19 09:29:11.763483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.763502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.773141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e5a90 00:27:10.859 [2024-11-19 09:29:11.774339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.774357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.782762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f2948 00:27:10.859 [2024-11-19 09:29:11.784091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.784110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.792451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e01f8 00:27:10.859 [2024-11-19 09:29:11.793893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.793912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.801804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ef270 00:27:10.859 [2024-11-19 09:29:11.803293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.859 [2024-11-19 09:29:11.803312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:10.859 [2024-11-19 09:29:11.809693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e4578 00:27:10.860 [2024-11-19 09:29:11.810393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.810413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.819444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f5378 00:27:10.860 [2024-11-19 09:29:11.820214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.820233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.828973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fdeb0 00:27:10.860 [2024-11-19 09:29:11.830049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.830068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.838170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8618 00:27:10.860 [2024-11-19 09:29:11.839269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.839292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.847395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ff3c8 00:27:10.860 [2024-11-19 09:29:11.848508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.848527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.856676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166df118 00:27:10.860 [2024-11-19 09:29:11.857765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.857784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.865865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ee190 00:27:10.860 [2024-11-19 09:29:11.866952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.866971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.875075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f31b8 00:27:10.860 [2024-11-19 09:29:11.876157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.876177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.884264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ecc78 00:27:10.860 [2024-11-19 09:29:11.885346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.885364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.893502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fac10 00:27:10.860 [2024-11-19 09:29:11.894605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.894624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.902762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e5ec8 00:27:10.860 [2024-11-19 09:29:11.903851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.860 [2024-11-19 09:29:11.903870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:10.860 [2024-11-19 09:29:11.912190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f1ca0 00:27:11.119 [2024-11-19 09:29:11.913325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.119 [2024-11-19 09:29:11.913348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.119 [2024-11-19 09:29:11.921674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eb328 00:27:11.119 [2024-11-19 09:29:11.922832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.119 [2024-11-19 09:29:11.922854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.119 [2024-11-19 09:29:11.931061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e7c50 00:27:11.119 [2024-11-19 09:29:11.932140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.119 [2024-11-19 09:29:11.932160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:11.940332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fc560 00:27:11.120 [2024-11-19 09:29:11.941419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:11.941439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:11.948884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166efae0 00:27:11.120 [2024-11-19 09:29:11.949943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:11.949966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:11.958521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e1710 00:27:11.120 [2024-11-19 09:29:11.959697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:11.959716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:11.968165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f7538 00:27:11.120 [2024-11-19 09:29:11.969483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:11.969502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:11.976740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f96f8 00:27:11.120 [2024-11-19 09:29:11.977697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:11.977716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:11.985790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eee38 00:27:11.120 [2024-11-19 09:29:11.986744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:11.986763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:11.994988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ef270 00:27:11.120 [2024-11-19 09:29:11.995934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:11.995956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.004200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f6458 00:27:11.120 [2024-11-19 09:29:12.005161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 
09:29:12.005180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.013399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e2c28 00:27:11.120 [2024-11-19 09:29:12.014376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.014395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.022650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ec840 00:27:11.120 [2024-11-19 09:29:12.023616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.023635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.031982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fef90 00:27:11.120 [2024-11-19 09:29:12.032980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.033000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.041303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eea00 00:27:11.120 [2024-11-19 09:29:12.042257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.042276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.050554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e3d08 00:27:11.120 [2024-11-19 09:29:12.051523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.051542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.059757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f1ca0 00:27:11.120 [2024-11-19 09:29:12.060746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.060765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.069211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eb328 00:27:11.120 [2024-11-19 09:29:12.070196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:11.120 [2024-11-19 09:29:12.070215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.078615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e7c50 00:27:11.120 [2024-11-19 09:29:12.079585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.079607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.087974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fc560 00:27:11.120 [2024-11-19 09:29:12.088920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.088938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.097177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f0bc0 00:27:11.120 [2024-11-19 09:29:12.098134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.098153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.106367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fb480 00:27:11.120 [2024-11-19 09:29:12.107319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.107339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.115598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f9f68 00:27:11.120 [2024-11-19 09:29:12.116601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.116620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.125103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f92c0 00:27:11.120 [2024-11-19 09:29:12.125855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.125875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.133833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f6458 00:27:11.120 [2024-11-19 09:29:12.135088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24193 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.135107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.141724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166feb58 00:27:11.120 [2024-11-19 09:29:12.142442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.142462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.151337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e4de8 00:27:11.120 [2024-11-19 09:29:12.152182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.152200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.160689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eea00 00:27:11.120 [2024-11-19 09:29:12.161554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.161573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.120 [2024-11-19 09:29:12.170211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f96f8 00:27:11.120 [2024-11-19 09:29:12.171089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.120 [2024-11-19 09:29:12.171111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.180091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fa3a0 00:27:11.380 [2024-11-19 09:29:12.181040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.181064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.188840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f4f40 00:27:11.380 [2024-11-19 09:29:12.189767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.189787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.198458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e9168 00:27:11.380 [2024-11-19 09:29:12.199509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:2490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.199528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.208096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f0788 00:27:11.380 [2024-11-19 09:29:12.209265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.209284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.217722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e6738 00:27:11.380 [2024-11-19 09:29:12.219016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.219035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.227420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166dece0 00:27:11.380 [2024-11-19 09:29:12.228859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.228877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.235967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f2510 00:27:11.380 [2024-11-19 09:29:12.237044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.237063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.245058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e6fa8 00:27:11.380 [2024-11-19 09:29:12.246123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.246142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.254259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f57b0 00:27:11.380 [2024-11-19 09:29:12.255336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.255355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.263444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e3d08 00:27:11.380 [2024-11-19 09:29:12.264517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:17084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.264536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.272645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fe2e8 00:27:11.380 [2024-11-19 09:29:12.273707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.380 [2024-11-19 09:29:12.273725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.380 [2024-11-19 09:29:12.281865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f7970 00:27:11.380 [2024-11-19 09:29:12.282851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.282870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.291391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eee38 00:27:11.381 [2024-11-19 09:29:12.292597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.292616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.300102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166fef90 00:27:11.381 [2024-11-19 09:29:12.301295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.301314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.308741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f2d80 00:27:11.381 [2024-11-19 09:29:12.309583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.309601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.318103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e9e10 00:27:11.381 [2024-11-19 09:29:12.318739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.318762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.328957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e9168 00:27:11.381 [2024-11-19 09:29:12.330392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.330410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.338515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e7818 00:27:11.381 [2024-11-19 09:29:12.339917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.339937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.347647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166de8a8 00:27:11.381 [2024-11-19 09:29:12.349110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.349129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.357325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ebb98 00:27:11.381 [2024-11-19 09:29:12.358842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.358861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.363815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f57b0 00:27:11.381 [2024-11-19 09:29:12.364544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.364564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.372624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e1b48 00:27:11.381 [2024-11-19 09:29:12.373296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.373315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.382261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ec408 00:27:11.381 [2024-11-19 09:29:12.383052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.383071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.392081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f8a50 00:27:11.381 [2024-11-19 
09:29:12.392992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.393012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.401718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eaab8 00:27:11.381 [2024-11-19 09:29:12.402761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.402780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.411385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e6b70 00:27:11.381 [2024-11-19 09:29:12.412542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.412561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.421013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ec408 00:27:11.381 [2024-11-19 09:29:12.422343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.422363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:11.381 [2024-11-19 09:29:12.430765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ff3c8 00:27:11.381 [2024-11-19 09:29:12.432249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.381 [2024-11-19 09:29:12.432272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.439669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166df988 00:27:11.640 [2024-11-19 09:29:12.440734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.640 [2024-11-19 09:29:12.440756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.448130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f7970 00:27:11.640 [2024-11-19 09:29:12.449376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.640 [2024-11-19 09:29:12.449396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.457709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166eb760 
00:27:11.640 [2024-11-19 09:29:12.458512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.640 [2024-11-19 09:29:12.458532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.467128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ed0b0 00:27:11.640 [2024-11-19 09:29:12.468224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.640 [2024-11-19 09:29:12.468243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.476554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f1430 00:27:11.640 [2024-11-19 09:29:12.477625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.640 [2024-11-19 09:29:12.477645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.485204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f1868 00:27:11.640 [2024-11-19 09:29:12.486248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.640 [2024-11-19 09:29:12.486267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.494656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f9f68 00:27:11.640 [2024-11-19 09:29:12.495745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.640 [2024-11-19 09:29:12.495765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.503451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166f2948 00:27:11.640 [2024-11-19 09:29:12.504387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.640 [2024-11-19 09:29:12.504406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.512945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166e6b70 00:27:11.640 [2024-11-19 09:29:12.513886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.640 [2024-11-19 09:29:12.513904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:11.640 [2024-11-19 09:29:12.522672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) 
with pdu=0x2000166e6738
[2024-11-19 09:29:12.523774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.640 [2024-11-19 09:29:12.523794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:11.640 27369.50 IOPS, 106.91 MiB/s [2024-11-19T08:29:12.699Z]
[2024-11-19 09:29:12.533104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4280) with pdu=0x2000166ddc00
[2024-11-19 09:29:12.533962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.640 [2024-11-19 09:29:12.533980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:11.640
00:27:11.640                                            Latency(us)
00:27:11.640 [2024-11-19T08:29:12.699Z] Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s    Average   min       max
00:27:11.640 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:11.640 nvme0n1 :            2.00       27371.64  106.92  0.00    0.00    4670.43   2279.51   12879.25
00:27:11.640 [2024-11-19T08:29:12.699Z] ===================================================================================================================
00:27:11.640 [2024-11-19T08:29:12.699Z] Total :              27371.64  106.92  0.00    0.00    4670.43   2279.51   12879.25
00:27:11.640 {
00:27:11.640   "results": [
00:27:11.640     {
00:27:11.640       "job": "nvme0n1",
00:27:11.640       "core_mask": "0x2",
00:27:11.640       "workload": "randwrite",
00:27:11.640       "status": "finished",
00:27:11.640       "queue_depth": 128,
00:27:11.640       "io_size": 4096,
00:27:11.640       "runtime": 2.00452,
00:27:11.640       "iops": 27371.64009338894,
00:27:11.640       "mibps": 106.92046911480055,
00:27:11.640       "io_failed": 0,
00:27:11.640       "io_timeout": 0,
00:27:11.640       "avg_latency_us": 4670.4256176794315,
00:27:11.640       "min_latency_us": 2279.513043478261,
00:27:11.640       "max_latency_us": 12879.248695652173
00:27:11.640     }
00:27:11.640   ],
00:27:11.640   "core_count": 1
00:27:11.640 }
00:27:11.640 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:11.640 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:11.640 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:11.641 | .driver_specific
00:27:11.641 | .nvme_error
00:27:11.641 | .status_code
00:27:11.641 | .command_transient_transport_error'
00:27:11.641 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 ))
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1261164
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1261164 ']'
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1261164
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1261164
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1261164'
killing process with pid 1261164
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1261164
00:27:11.899 Received shutdown signal, test time was about 2.000000 seconds
00:27:11.899
00:27:11.899                                            Latency(us)
[2024-11-19T08:29:12.958Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
[2024-11-19T08:29:12.958Z] ===================================================================================================================
[2024-11-19T08:29:12.958Z] Total :              0.00  0.00  0.00    0.00  0.00     0.00  0.00
00:27:11.899 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1261164
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1261797
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1261797 /var/tmp/bperf.sock
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1261797 ']'
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:12.158 09:29:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:12.158 [2024-11-19 09:29:13.028907] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
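Two details in the run just torn down are worth pinning down. The summary numbers are self-consistent: 27371.64 IOPS at an I/O size of 4096 bytes is 27371.64/256 = 106.92 MiB/s, and 27371.64 IOPS over the 2.00452 s runtime is about 54,867 I/Os. The pass condition is the `(( 215 > 0 ))` test: get_transient_errcount() must see a nonzero command_transient_transport_error counter, which it extracts from bdev_get_iostat with the jq filter shown in the trace. A rough Python equivalent of that shell helper (a sketch mirroring the traced commands; the function shape and error handling are this sketch's own):

    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def get_transient_errcount(bdev: str, sock: str = "/var/tmp/bperf.sock") -> int:
        # Same data path as the trace: rpc.py bdev_get_iostat -b nvme0n1, then
        # .bdevs[0].driver_specific.nvme_error.status_code
        #   .command_transient_transport_error
        out = subprocess.check_output([RPC, "-s", sock, "bdev_get_iostat", "-b", bdev])
        stats = json.loads(out)["bdevs"][0]
        return int(stats["driver_specific"]["nvme_error"]["status_code"]
                   ["command_transient_transport_error"])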
00:27:12.158 [2024-11-19 09:29:13.028971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261797 ]
00:27:12.158 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:12.158 Zero copy mechanism will not be used.
00:27:12.158 [2024-11-19 09:29:13.102994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:12.158 [2024-11-19 09:29:13.145418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:12.416 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:12.674 nvme0n1
00:27:12.674 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:12.674 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:12.674 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:12.674 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:12.674 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:12.674 09:29:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:12.933 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:12.934 Zero copy mechanism will not be used.
00:27:12.934 Running I/O for 2 seconds...
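The setup above is what makes the digest errors that follow deliberate rather than accidental: per-status-code NVMe error counters are enabled (--nvme-error-stat, with unlimited retries via --bdev-retry-count -1), CRC32C corruption is switched off while the controller attaches with data digest enabled (--ddgst), and only then is the accel error injector re-armed in corrupt mode (-i 32, presumably corrupting one crc32c operation per 32) before perform_tests starts the I/O. A condensed sketch of the same RPC sequence, driven the way the bperf_rpc shell helper does it (the rpc() wrapper is this sketch's own; the commands and arguments are verbatim from the trace):

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    SOCK = "/var/tmp/bperf.sock"

    def rpc(*args: str) -> str:
        # bperf_rpc equivalent: every call targets the bdevperf RPC socket.
        return subprocess.check_output([RPC, "-s", SOCK, *args], text=True)

    rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable")  # clean attach
    rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")            # prints nvme0n1
    rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "32")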
00:27:12.934 [2024-11-19 09:29:13.801791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.802060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.802089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.807943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.808220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.808245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.813591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.813841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.813864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.818888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.819143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.819165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.823858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.824124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.824145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.829306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.829557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.829579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.834757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.835008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.835030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.839589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.839849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.839870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.844677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.844922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.844943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.849569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.849829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.849851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.854689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.854936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.854963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.859570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.859831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.859852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.864421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.864668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.864689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.869184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.869430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.869451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.873766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.874022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.874043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.878598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.878847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.878868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.883365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.883615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.883636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.888665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.888928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.888954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.894302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.894561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.894586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.899484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.899734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.899756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.904308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.904567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.904587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.909659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.909918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.909939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.914769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.915034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.915055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.920040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.920290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.920311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.925533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.934 [2024-11-19 09:29:13.925889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.934 [2024-11-19 09:29:13.925910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.934 [2024-11-19 09:29:13.931227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.931473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.931494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.937207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.937454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.937475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.942475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.942729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 
[2024-11-19 09:29:13.942749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.948034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.948284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.948305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.953175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.953423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.953444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.958158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.958409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.958430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.963459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.963709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.963730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.968116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.968369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.968390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.972887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.973140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.973162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.979430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.979680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.979702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.935 [2024-11-19 09:29:13.985394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:12.935 [2024-11-19 09:29:13.985658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.935 [2024-11-19 09:29:13.985684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.195 [2024-11-19 09:29:13.993267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.195 [2024-11-19 09:29:13.993535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.195 [2024-11-19 09:29:13.993560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.195 [2024-11-19 09:29:14.000282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.195 [2024-11-19 09:29:14.000540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.195 [2024-11-19 09:29:14.000563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.195 [2024-11-19 09:29:14.007486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.195 [2024-11-19 09:29:14.007745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.195 [2024-11-19 09:29:14.007767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.195 [2024-11-19 09:29:14.013061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.195 [2024-11-19 09:29:14.013310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.195 [2024-11-19 09:29:14.013332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.195 [2024-11-19 09:29:14.017695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.017940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.017967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.022431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.022680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.022701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.026909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.027159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.027180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.031676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.031927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.031953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.036452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.036701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.036728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.041242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.041490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.041511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.045984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.046237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.046258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.050686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.050937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.050964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.055197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.055465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.055487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.059913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.060174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.060196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.064816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.065074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.065096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.070323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.070581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.070602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.075911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.076173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.076204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.082092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.082349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.082370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.088329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.088582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.088604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.095816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 
[2024-11-19 09:29:14.096070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.096092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.101743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.101999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.102020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.107290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.107542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.107564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.113360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.113610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.113632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.120375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.120637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.120659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.196 [2024-11-19 09:29:14.126813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.196 [2024-11-19 09:29:14.127082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.196 [2024-11-19 09:29:14.127104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.132853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.133115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.133137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.139588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.139844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.139865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.146997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.147261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.147283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.153514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.153779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.153801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.160165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.160426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.160449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.166026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.166275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.166297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.171204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.171453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.171474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.176269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.176518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.176539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.180927] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.181187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.181208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.186178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.186443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.186468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.191294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.191542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.191564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.197223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.197476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.197496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.202033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.202292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.202313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.206896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.207162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.207184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.211469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.211720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.211741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:13.197 [2024-11-19 09:29:14.215900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.216163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.216184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.220754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.221010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.221031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.225566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.225817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.225838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.229969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.230231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.230254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.234361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.234610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.234632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.238862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.239146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.239168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.243464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.243734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.197 [2024-11-19 09:29:14.243755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.197 [2024-11-19 09:29:14.247999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.197 [2024-11-19 09:29:14.248255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.198 [2024-11-19 09:29:14.248281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.252573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.252821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.252846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.257247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.257498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.257522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.261771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.262024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.262057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.266405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.266653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.266680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.271493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.271748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.271770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.277574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.277824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.277846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.283686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.283938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.283966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.290153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.290405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.290427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.295963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.296230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.296251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.458 [2024-11-19 09:29:14.302719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.458 [2024-11-19 09:29:14.302976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.458 [2024-11-19 09:29:14.302997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.308056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.308310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.308332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.312595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.312851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.312873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.317076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.317336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.317358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.321556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.321810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.321831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.326030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.326287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.326308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.330481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.330738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.330759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.335080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.335350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.335372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.339720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.339978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.340000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.344134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.344391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.344412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.348651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.348904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 
[2024-11-19 09:29:14.348925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.353166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.353416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.353437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.357994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.358244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.358266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.363800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.364071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.364093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.370000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.370251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.370273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.374821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.375066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.375088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.379814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.380070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.459 [2024-11-19 09:29:14.380092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:13.459 [2024-11-19 09:29:14.384842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:13.459 [2024-11-19 09:29:14.385098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:27:13.459 [2024-11-19 09:29:14.385119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:13.459 [2024-11-19 09:29:14.389830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90
00:27:13.459 [2024-11-19 09:29:14.390086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.459 [2024-11-19 09:29:14.390108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same Data digest error *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR *NOTICE* triplet repeats for each queued len:32 WRITE on qid:1, with only lba, sqhd, and the timestamps varying, from [2024-11-19 09:29:14.394772] through [2024-11-19 09:29:14.789749] (elapsed 00:27:13.459 through 00:27:13.982) ...]
00:27:13.982 5963.00 IOPS, 745.38 MiB/s [2024-11-19T08:29:15.041Z]
00:27:13.982 [2024-11-19 09:29:14.795223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90
00:27:13.982 [2024-11-19 09:29:14.795453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.982 [2024-11-19 09:29:14.795475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
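For context on what this run of messages is exercising: tcp.c's data_crc32_calc_done fires when the transport's CRC32C data-digest check over an NVMe/TCP data PDU fails, and each affected WRITE is then completed by nvme_qpair.c with NVMe status (00/22), COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 means the do-not-retry bit is clear, so the command stays retryable. Below is a minimal, self-contained sketch of such a digest check, assuming a plain bitwise software CRC32C (Castagnoli, reflected polynomial 0x82F63B78) and hypothetical function names crc32c()/ddgst_ok(); it is an illustration, not SPDK's actual optimized implementation.

/*
 * Hedged sketch, not SPDK source: software CRC32C of the kind used
 * for the NVMe/TCP data digest (DDGST).  Function names below are
 * hypothetical, for illustration only.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = (const uint8_t *)buf;
    uint32_t crc = 0xFFFFFFFFu;                 /* standard initial value */

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++) {
            /* reflected CRC32C (Castagnoli) polynomial */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
        }
    }
    return crc ^ 0xFFFFFFFFu;                   /* final inversion */
}

/*
 * Mirrors the decision the digest callback makes: recompute CRC32C
 * over the data PDU payload and compare it with the DDGST the peer
 * sent.  A mismatch is what surfaces above as the Data digest error /
 * COMMAND TRANSIENT TRANSPORT ERROR pair (dnr:0, so retryable).
 */
static int ddgst_ok(const void *payload, size_t len, uint32_t ddgst)
{
    return crc32c(payload, len) == ddgst;
}

int main(void)
{
    uint8_t pdu_data[32 * 512];                 /* len:32 blocks, assuming 512 B each */
    memset(pdu_data, 0xA5, sizeof(pdu_data));

    uint32_t good = crc32c(pdu_data, sizeof(pdu_data));
    printf("intact payload:    %s\n",
           ddgst_ok(pdu_data, sizeof(pdu_data), good) ? "ok" : "digest error");

    pdu_data[100] ^= 0x01;                      /* flip one bit "in flight" */
    printf("corrupted payload: %s\n",
           ddgst_ok(pdu_data, sizeof(pdu_data), good) ? "ok" : "digest error");
    return 0;
}

As a sanity check on the interleaved progress counter above, 745.38 MiB/s over 5963.00 IOPS works out to roughly 0.125 MiB, i.e. about 128 KiB per I/O at that sample point.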
00:27:13.982 [2024-11-19 09:29:14.800150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90
00:27:13.982 [2024-11-19 09:29:14.800383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.982 [2024-11-19 09:29:14.800405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the triplet continues for further len:32 WRITEs on qid:1 (lba and sqhd vary) from [2024-11-19 09:29:14.804884] through [2024-11-19 09:29:15.081513] (elapsed 00:27:13.982 through 00:27:14.244) ...]
00:27:14.244 [2024-11-19 09:29:15.086116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90
00:27:14.244 [2024-11-19 09:29:15.086346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.086367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.090922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.091155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.091176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.095662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.095880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.095902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.100379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.100598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.100620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.105106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.105331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.105353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.109792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.110021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.110043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.114433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.114648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.114669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.119188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.119419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.119439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.123870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.124099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.124120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.128821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.129049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.129070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.133643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.133862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.133884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.138368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.138587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.138609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.143489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.143717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.143738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.149148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.149370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.149391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.154366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.154584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.154605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.159442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.159660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.159681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.164249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.164467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.164488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.169115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.169335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.169355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.173765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.173992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.174013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.178597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.178815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.178836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.183922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 [2024-11-19 09:29:15.184147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.184169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.244 [2024-11-19 09:29:15.188811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.244 
[2024-11-19 09:29:15.189035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.244 [2024-11-19 09:29:15.189056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.193853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.194078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.194103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.199083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.199307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.199330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.203932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.204159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.204181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.209360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.209577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.209597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.214303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.214520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.214540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.218996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.219216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.219236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.224192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.224409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.224430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.229150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.229367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.229388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.234135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.234352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.234373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.239257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.239480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.239501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.243917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.244147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.244169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.248851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.249075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.249095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.253924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.254152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.254174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.258840] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.259064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.259085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.264499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.264718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.264738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.269365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.269586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.269607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.274260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.274478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.274498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.279204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.279425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.279450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.283874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.284095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.284117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.245 [2024-11-19 09:29:15.288339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.288555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.288575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:14.245 [2024-11-19 09:29:15.293127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.245 [2024-11-19 09:29:15.293365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.245 [2024-11-19 09:29:15.293393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.505 [2024-11-19 09:29:15.298277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.505 [2024-11-19 09:29:15.298499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.505 [2024-11-19 09:29:15.298524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.505 [2024-11-19 09:29:15.303741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.505 [2024-11-19 09:29:15.303977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.505 [2024-11-19 09:29:15.304001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.505 [2024-11-19 09:29:15.308301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.505 [2024-11-19 09:29:15.308519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.505 [2024-11-19 09:29:15.308541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.505 [2024-11-19 09:29:15.312720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.505 [2024-11-19 09:29:15.312936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.505 [2024-11-19 09:29:15.312964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.505 [2024-11-19 09:29:15.317002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.505 [2024-11-19 09:29:15.317221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.505 [2024-11-19 09:29:15.317243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.505 [2024-11-19 09:29:15.321323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.505 [2024-11-19 09:29:15.321552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.505 [2024-11-19 09:29:15.321574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.505 [2024-11-19 09:29:15.325643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.325866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.325887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.329943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.330177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.330210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.334253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.334472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.334493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.338534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.338750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.338771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.342758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.342981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.343002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.347023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.347242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.347264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.351263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.351479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.351500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.355504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.355721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.355742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.359689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.359907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.359929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.363853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.364076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.364097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.368063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.368283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.368304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.372553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.372773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.372794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.376932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.377162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.377183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.381136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.381354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.381374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.385361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.385581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.385602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.389573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.389793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.389813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.393773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.393997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.394022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.397983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.398204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.398225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.402161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.402380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.402400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.406880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.407104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.407127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.412660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.412965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 
[2024-11-19 09:29:15.412986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.418671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.418962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.418983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.424886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.506 [2024-11-19 09:29:15.425202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.506 [2024-11-19 09:29:15.425223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.506 [2024-11-19 09:29:15.431021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.431318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.431338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.436979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.437277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.437299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.443714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.443995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.444016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.450454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.450717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.450737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.457110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.457407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.457429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.464075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.464364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.464385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.471377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.471711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.471733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.478727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.478884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.478902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.486630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.486737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.486756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.494391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.494494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.494514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.501737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.501794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.501812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.508129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.508247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.508266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.514487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.514539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.514558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.519406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.519459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.519478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.524662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.524713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.524732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.531036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.531085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.531105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.536101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.536162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.536181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.541324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.541375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.541395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.546216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.546269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.546288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.550915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.550967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.550990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.507 [2024-11-19 09:29:15.556144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.507 [2024-11-19 09:29:15.556207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.507 [2024-11-19 09:29:15.556229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.561138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.561189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.561211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.566696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.566748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.566769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.571450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.571500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.571519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.576552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.576605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.576625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.581624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 
[2024-11-19 09:29:15.581683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.581703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.586349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.586409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.586427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.590989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.591044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.591064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.596387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.596455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.596475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.601688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.601758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.601778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.606278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.606332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.606351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.610818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.610874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.610893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.615243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.615299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.615318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.619708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.619772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.619791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.624236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.624303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.624322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.628880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.628937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.628964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.633426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.633479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.633498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.637922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.767 [2024-11-19 09:29:15.637988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.767 [2024-11-19 09:29:15.638008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.767 [2024-11-19 09:29:15.642378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.642431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.642450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.646853] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.646923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.646942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.651399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.651460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.651479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.655900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.655972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.655992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.660335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.660396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.660416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.664839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.664892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.664912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.669550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.669606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.669626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.674599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.674650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.674673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
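Each NOTICE pair above is one 128 KiB write (len:32 blocks) that failed its NVMe/TCP data-digest (CRC32C) check and was completed back to the host as TRANSIENT TRANSPORT ERROR (00/22); the digest_error test only checks that every digest failure surfaces as such a transient, retryable completion, which it tallies from the bdev iostat JSON. A minimal sketch of that query, reusing the rpc.py path, bperf socket, and jq filter that appear further down in this log:

    # Sketch: count the transient transport errors recorded for nvme0n1,
    # mirroring the get_transient_errcount helper used later in this run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In this run the count comes back as 399, which satisfies the (( count > 0 )) assertion visible below.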
00:27:14.768 [2024-11-19 09:29:15.679693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.679754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.679773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.684269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.684339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.684358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.688702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.688766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.688785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.693150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.693205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.693224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.697528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.697581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.697601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.701843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.701909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.701929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.706315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.706366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.706385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.710880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.710935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.710962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.716433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.716488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.716507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.721301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.721367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.721386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.725757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.725843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.725862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.730264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.730314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.730334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.734706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.734760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.734780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.739096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.739150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.739169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.743532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.743588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.743606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.748269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.748362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.748381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.753504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.753557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.753579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.758450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.758535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.758555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.768 [2024-11-19 09:29:15.763011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.768 [2024-11-19 09:29:15.763071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.768 [2024-11-19 09:29:15.763089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.769 [2024-11-19 09:29:15.768001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.769 [2024-11-19 09:29:15.768056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.769 [2024-11-19 09:29:15.768075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.769 [2024-11-19 09:29:15.773027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.769 [2024-11-19 09:29:15.773083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.769 [2024-11-19 09:29:15.773102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.769 [2024-11-19 09:29:15.777674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.769 [2024-11-19 09:29:15.777728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.769 [2024-11-19 09:29:15.777747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.769 [2024-11-19 09:29:15.782237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.769 [2024-11-19 09:29:15.782292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.769 [2024-11-19 09:29:15.782311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.769 [2024-11-19 09:29:15.786593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.769 [2024-11-19 09:29:15.786646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.769 [2024-11-19 09:29:15.786666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.769 [2024-11-19 09:29:15.790876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.769 [2024-11-19 09:29:15.790931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.769 [2024-11-19 09:29:15.790956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.769 6177.50 IOPS, 772.19 MiB/s [2024-11-19T08:29:15.828Z] [2024-11-19 09:29:15.796599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16b4760) with pdu=0x2000166fef90 00:27:14.769 [2024-11-19 09:29:15.796658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:14.769 [2024-11-19 09:29:15.796678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.769 00:27:14.769 Latency(us) 00:27:14.769 [2024-11-19T08:29:15.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.769 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:14.769 nvme0n1 : 2.00 6174.95 771.87 0.00 0.00 2586.57 1852.10 10086.85 00:27:14.769 [2024-11-19T08:29:15.828Z] =================================================================================================================== 00:27:14.769 [2024-11-19T08:29:15.828Z] Total : 6174.95 771.87 0.00 0.00 2586.57 1852.10 10086.85 00:27:14.769 { 00:27:14.769 "results": [ 00:27:14.769 { 00:27:14.769 "job": "nvme0n1", 00:27:14.769 "core_mask": "0x2", 00:27:14.769 "workload": "randwrite", 00:27:14.769 "status": "finished", 00:27:14.769 "queue_depth": 16, 
00:27:14.769 "io_size": 131072, 00:27:14.769 "runtime": 2.00358, 00:27:14.769 "iops": 6174.946845147187, 00:27:14.769 "mibps": 771.8683556433983, 00:27:14.769 "io_failed": 0, 00:27:14.769 "io_timeout": 0, 00:27:14.769 "avg_latency_us": 2586.5688120440263, 00:27:14.769 "min_latency_us": 1852.104347826087, 00:27:14.769 "max_latency_us": 10086.845217391305 00:27:14.769 } 00:27:14.769 ], 00:27:14.769 "core_count": 1 00:27:14.769 } 00:27:15.027 09:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:15.027 09:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:15.027 09:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:15.027 | .driver_specific 00:27:15.027 | .nvme_error 00:27:15.027 | .status_code 00:27:15.027 | .command_transient_transport_error' 00:27:15.027 09:29:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:15.027 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 399 > 0 )) 00:27:15.027 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1261797 00:27:15.027 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1261797 ']' 00:27:15.027 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1261797 00:27:15.027 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:15.027 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:15.027 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1261797 00:27:15.285 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:15.285 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:15.285 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1261797' 00:27:15.285 killing process with pid 1261797 00:27:15.285 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1261797 00:27:15.285 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.285 00:27:15.285 Latency(us) 00:27:15.285 [2024-11-19T08:29:16.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.285 [2024-11-19T08:29:16.344Z] =================================================================================================================== 00:27:15.285 [2024-11-19T08:29:16.344Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.285 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1261797 00:27:15.285 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1259982 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1259982 ']' 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill 
-0 1259982 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1259982 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1259982' 00:27:15.286 killing process with pid 1259982 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1259982 00:27:15.286 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1259982 00:27:15.545 00:27:15.545 real 0m13.823s 00:27:15.545 user 0m26.597s 00:27:15.545 sys 0m4.477s 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:15.545 ************************************ 00:27:15.545 END TEST nvmf_digest_error 00:27:15.545 ************************************ 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:15.545 rmmod nvme_tcp 00:27:15.545 rmmod nvme_fabrics 00:27:15.545 rmmod nvme_keyring 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1259982 ']' 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1259982 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 1259982 ']' 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 1259982 00:27:15.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1259982) - No such process 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 1259982 is not found' 00:27:15.545 Process with pid 1259982 is not found 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.545 09:29:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:18.081 00:27:18.081 real 0m36.060s 00:27:18.081 user 0m55.022s 00:27:18.081 sys 0m13.543s 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:18.081 ************************************ 00:27:18.081 END TEST nvmf_digest 00:27:18.081 ************************************ 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.081 ************************************ 00:27:18.081 START TEST nvmf_bdevperf 00:27:18.081 ************************************ 00:27:18.081 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:18.082 * Looking for test storage... 
00:27:18.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:18.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.082 --rc genhtml_branch_coverage=1 00:27:18.082 --rc genhtml_function_coverage=1 00:27:18.082 --rc genhtml_legend=1 00:27:18.082 --rc geninfo_all_blocks=1 00:27:18.082 --rc geninfo_unexecuted_blocks=1 00:27:18.082 00:27:18.082 ' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:18.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.082 --rc genhtml_branch_coverage=1 00:27:18.082 --rc genhtml_function_coverage=1 00:27:18.082 --rc genhtml_legend=1 00:27:18.082 --rc geninfo_all_blocks=1 00:27:18.082 --rc geninfo_unexecuted_blocks=1 00:27:18.082 00:27:18.082 ' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:18.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.082 --rc genhtml_branch_coverage=1 00:27:18.082 --rc genhtml_function_coverage=1 00:27:18.082 --rc genhtml_legend=1 00:27:18.082 --rc geninfo_all_blocks=1 00:27:18.082 --rc geninfo_unexecuted_blocks=1 00:27:18.082 00:27:18.082 ' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:18.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.082 --rc genhtml_branch_coverage=1 00:27:18.082 --rc genhtml_function_coverage=1 00:27:18.082 --rc genhtml_legend=1 00:27:18.082 --rc geninfo_all_blocks=1 00:27:18.082 --rc geninfo_unexecuted_blocks=1 00:27:18.082 00:27:18.082 ' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:18.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:18.082 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:18.083 09:29:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.655 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:24.656 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:24.656 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
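The xtrace above shows nvmftestinit probing the two detected e810 functions (0000:86:00.0 and 0000:86:00.1) and resolving each to its kernel net device purely through a sysfs glob; the per-port loop continues directly below. Condensed into a standalone sketch (PCI address and resulting device name taken from this run):

    # Map a PCI function to its net device the way
    # gather_supported_nvmf_pci_devs does above: glob sysfs, strip the path.
    pci=0000:86:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip dirs -> cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The two devices found this way, cvl_0_0 and cvl_0_1, become the target and initiator interfaces for the netns-based TCP setup that follows.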
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:24.656 Found net devices under 0000:86:00.0: cvl_0_0 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:24.656 Found net devices under 0000:86:00.1: cvl_0_1 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:27:24.656 00:27:24.656 --- 10.0.0.2 ping statistics --- 00:27:24.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.656 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:27:24.656 00:27:24.656 --- 10.0.0.1 ping statistics --- 00:27:24.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.656 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1265853 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1265853 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1265853 ']' 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:24.656 [2024-11-19 09:29:24.860537] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1265853
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1265853
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1265853 ']'
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:24.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:24.656 09:29:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.656 [2024-11-19 09:29:24.860537] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:27:24.656 [2024-11-19 09:29:24.860582] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:24.656 [2024-11-19 09:29:24.938584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:24.656 [2024-11-19 09:29:24.978773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:24.656 [2024-11-19 09:29:24.978811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:24.656 [2024-11-19 09:29:24.978819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:24.656 [2024-11-19 09:29:24.978826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:24.656 [2024-11-19 09:29:24.978831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:24.656 [2024-11-19 09:29:24.980259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:24.657 [2024-11-19 09:29:24.980346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:24.657 [2024-11-19 09:29:24.980346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
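nvmfappstart -m 0xE runs the target on cores 1-3 (0xE = 0b1110), which is why exactly three reactors start above, and waitforlisten simply polls the RPC socket (/var/tmp/spdk.sock, up to max_retries=100) before returning. A hedged sketch of an equivalent start-and-wait loop, using the paths from this job; the 0.1 s polling interval is an assumption, and spdk_get_version is just a cheap RPC that only succeeds once the app is listening:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # launch the target inside the namespace, three cores, all tracepoint groups
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # the UNIX socket is on the shared filesystem, so rpc.py works from the root ns
  for i in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.1
  done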
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.657 [2024-11-19 09:29:25.124738] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.657 Malloc0
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:24.657 [2024-11-19 09:29:25.195198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
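The four rpc_cmd calls above are ordinary SPDK JSON-RPCs: create the TCP transport (with -o and an 8192-byte in-capsule data size), create a 64 MiB malloc bdev with 512-byte blocks, create subsystem cnode1 allowing any host, attach the namespace, and add the 10.0.0.2:4420 listener. Outside the harness the same provisioning can be done with scripts/rpc.py; method names and arguments below are taken verbatim from the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0          # 64 MiB, 512 B blocks
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420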
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:27:24.657 {
00:27:24.657 "params": {
00:27:24.657 "name": "Nvme$subsystem",
00:27:24.657 "trtype": "$TEST_TRANSPORT",
00:27:24.657 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:24.657 "adrfam": "ipv4",
00:27:24.657 "trsvcid": "$NVMF_PORT",
00:27:24.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:24.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:24.657 "hdgst": ${hdgst:-false},
00:27:24.657 "ddgst": ${ddgst:-false}
00:27:24.657 },
00:27:24.657 "method": "bdev_nvme_attach_controller"
00:27:24.657 }
00:27:24.657 EOF
00:27:24.657 )")
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:27:24.657 09:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:27:24.657 "params": {
00:27:24.657 "name": "Nvme1",
00:27:24.657 "trtype": "tcp",
00:27:24.657 "traddr": "10.0.0.2",
00:27:24.657 "adrfam": "ipv4",
00:27:24.657 "trsvcid": "4420",
00:27:24.657 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:24.657 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:24.657 "hdgst": false,
00:27:24.657 "ddgst": false
00:27:24.657 },
00:27:24.657 "method": "bdev_nvme_attach_controller"
00:27:24.657 }'
00:27:24.657 [2024-11-19 09:29:25.248585] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:27:24.657 [2024-11-19 09:29:25.248628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265882 ]
00:27:24.657 [2024-11-19 09:29:25.323637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:24.657 [2024-11-19 09:29:25.365405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:24.657 Running I/O for 1 seconds...
00:27:25.592 10868.00 IOPS, 42.45 MiB/s
00:27:25.592
00:27:25.592 Latency(us)
[2024-11-19T08:29:26.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-19T08:29:26.651Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
[2024-11-19T08:29:26.651Z] Verification LBA range: start 0x0 length 0x4000
00:27:25.592 Nvme1n1 : 1.01 10942.15 42.74 0.00 0.00 11638.00 2151.29 10770.70
[2024-11-19T08:29:26.651Z] ===================================================================================================================
[2024-11-19T08:29:26.651Z] Total : 10942.15 42.74 0.00 0.00 11638.00 2151.29 10770.70
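gen_nvmf_target_json expands the heredoc template above into the bdev configuration that bdevperf reads from /dev/fd/62; the log's jq/printf output shows only the inner attach-controller entry. A sketch of the full file bdevperf consumes, written to a temp file instead of a process-substitution fd; the surrounding "subsystems"/"bdev"/"config" wrapper is the usual SPDK JSON-config shape and should be treated as an assumption here, since the log prints only the inner object:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  cat <<'EOF' >/tmp/bdevperf.json
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # same workload as the first run above: qd 128, 4 KiB I/O, verify, 1 second
  "$SPDK/build/examples/bdevperf" --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1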
00:27:25.850 09:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1266116
00:27:25.850 09:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:27:25.850 09:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:25.850 09:29:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:25.850 [... gen_nvmf_target_json expands the same Nvme1 attach template shown above (nvmf/common.sh@560-586), again resolving to traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1 ...]
00:27:25.850 [2024-11-19 09:29:26.779992] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:27:25.850 [2024-11-19 09:29:26.780039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266116 ]
00:27:25.850 [2024-11-19 09:29:26.855140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:25.850 [2024-11-19 09:29:26.893988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:26.109 Running I/O for 15 seconds...
[2024-11-19T08:29:30.047Z] 11082.00 IOPS, 43.29 MiB/s
[2024-11-19T08:29:30.047Z] 11036.50 IOPS, 43.11 MiB/s
00:27:28.988 09:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1265853
00:27:28.988 09:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:27:28.988 [2024-11-19 09:29:29.755075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:28.988 [2024-11-19 09:29:29.755112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.988 [... the same print_command/print_completion pair repeats for every other I/O still outstanding on qid:1 -- roughly 125 more WRITEs (lba 103944-104432) and READs (lba 103416-103928), each completed with ABORTED - SQ DELETION (00/08) ...]
00:27:28.991 [2024-11-19 09:29:29.757214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc23d00 is same with the state(6) to be set
00:27:28.991 [2024-11-19 09:29:29.757224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:28.991 [2024-11-19 09:29:29.757231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:28.991 [2024-11-19 09:29:29.757237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103928 len:8 PRP1 0x0 PRP2 0x0
00:27:28.991 [2024-11-19 09:29:29.757245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:28.991 [2024-11-19 09:29:29.760106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:28.991 [2024-11-19 09:29:29.760162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:28.991 [2024-11-19 09:29:29.760632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.991 [2024-11-19 09:29:29.760649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:28.991 [2024-11-19 09:29:29.760657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:28.991 [2024-11-19 09:29:29.760838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:28.991 [2024-11-19 09:29:29.761025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:28.991 [2024-11-19 09:29:29.761035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:28.991 [2024-11-19 09:29:29.761045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:28.991 [2024-11-19 09:29:29.761054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
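The flood above is the expected fallout of kill -9 on the target (pid 1265853): the host side tears down the TCP qpair, and every command still in flight on qid:1 (queue depth 128, matching the roughly 128 aborted entries) is completed with ABORTED - SQ DELETION (status 00/08). The bdev_nvme layer then tries to reset the controller, but connect() to 10.0.0.2:4420 is refused (errno = 111, ECONNREFUSED) because nothing is listening anymore, so the controller is left in failed state and retried. When triaging such a run, a quick count is often enough; the log file name below is hypothetical:

  # rough triage of a bdevperf console log after the target was hard-killed
  grep -c 'ABORTED - SQ DELETION' bdevperf.log           # how many I/Os were aborted
  grep -c 'connect() failed, errno = 111' bdevperf.log   # how many reconnects were refused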
00:27:28.991 [2024-11-19 09:29:29.773357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.991 [2024-11-19 09:29:29.773815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.991 [2024-11-19 09:29:29.773861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.991 [2024-11-19 09:29:29.773887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.991 [2024-11-19 09:29:29.774415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.991 [2024-11-19 09:29:29.774590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.991 [2024-11-19 09:29:29.774600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.991 [2024-11-19 09:29:29.774607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.991 [2024-11-19 09:29:29.774614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.991 [2024-11-19 09:29:29.786166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.991 [2024-11-19 09:29:29.786588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.991 [2024-11-19 09:29:29.786606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.991 [2024-11-19 09:29:29.786613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.991 [2024-11-19 09:29:29.786777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.991 [2024-11-19 09:29:29.786941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.991 [2024-11-19 09:29:29.786962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.991 [2024-11-19 09:29:29.786968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.991 [2024-11-19 09:29:29.786975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.991 [2024-11-19 09:29:29.799101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.991 [2024-11-19 09:29:29.799524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.991 [2024-11-19 09:29:29.799541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.991 [2024-11-19 09:29:29.799550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.991 [2024-11-19 09:29:29.799713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.991 [2024-11-19 09:29:29.799877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.991 [2024-11-19 09:29:29.799886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.991 [2024-11-19 09:29:29.799893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.991 [2024-11-19 09:29:29.799899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.991 [2024-11-19 09:29:29.812022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.991 [2024-11-19 09:29:29.812433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.992 [2024-11-19 09:29:29.812477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.992 [2024-11-19 09:29:29.812501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.992 [2024-11-19 09:29:29.813095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.992 [2024-11-19 09:29:29.813516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.992 [2024-11-19 09:29:29.813525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.992 [2024-11-19 09:29:29.813532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.992 [2024-11-19 09:29:29.813538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.992 [2024-11-19 09:29:29.824932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.992 [2024-11-19 09:29:29.825344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.992 [2024-11-19 09:29:29.825389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.992 [2024-11-19 09:29:29.825412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.992 [2024-11-19 09:29:29.826006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.992 [2024-11-19 09:29:29.826512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.992 [2024-11-19 09:29:29.826521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.992 [2024-11-19 09:29:29.826527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.992 [2024-11-19 09:29:29.826534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.992 [2024-11-19 09:29:29.837731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.992 [2024-11-19 09:29:29.838139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.992 [2024-11-19 09:29:29.838157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.992 [2024-11-19 09:29:29.838165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.992 [2024-11-19 09:29:29.838328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.992 [2024-11-19 09:29:29.838492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.992 [2024-11-19 09:29:29.838502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.992 [2024-11-19 09:29:29.838508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.992 [2024-11-19 09:29:29.838516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.992 [2024-11-19 09:29:29.850533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.992 [2024-11-19 09:29:29.850956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.992 [2024-11-19 09:29:29.850974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.992 [2024-11-19 09:29:29.850981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.992 [2024-11-19 09:29:29.851144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.992 [2024-11-19 09:29:29.851308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.992 [2024-11-19 09:29:29.851317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.992 [2024-11-19 09:29:29.851324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.992 [2024-11-19 09:29:29.851330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.992 [2024-11-19 09:29:29.863378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.992 [2024-11-19 09:29:29.863752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.992 [2024-11-19 09:29:29.863770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.992 [2024-11-19 09:29:29.863777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.992 [2024-11-19 09:29:29.863941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.992 [2024-11-19 09:29:29.864135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.992 [2024-11-19 09:29:29.864145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.992 [2024-11-19 09:29:29.864152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.992 [2024-11-19 09:29:29.864158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.992 [2024-11-19 09:29:29.876283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.992 [2024-11-19 09:29:29.876635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.992 [2024-11-19 09:29:29.876656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.992 [2024-11-19 09:29:29.876663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.992 [2024-11-19 09:29:29.876826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.992 [2024-11-19 09:29:29.877013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.992 [2024-11-19 09:29:29.877023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.992 [2024-11-19 09:29:29.877030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.992 [2024-11-19 09:29:29.877037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.992 [2024-11-19 09:29:29.889118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.992 [2024-11-19 09:29:29.889518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.992 [2024-11-19 09:29:29.889536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.992 [2024-11-19 09:29:29.889543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.992 [2024-11-19 09:29:29.889707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.992 [2024-11-19 09:29:29.889870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.992 [2024-11-19 09:29:29.889880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.992 [2024-11-19 09:29:29.889886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.992 [2024-11-19 09:29:29.889892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.992 [2024-11-19 09:29:29.901950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.992 [2024-11-19 09:29:29.902275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.992 [2024-11-19 09:29:29.902292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.992 [2024-11-19 09:29:29.902299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.992 [2024-11-19 09:29:29.902462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.992 [2024-11-19 09:29:29.902626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.992 [2024-11-19 09:29:29.902635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.992 [2024-11-19 09:29:29.902641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.992 [2024-11-19 09:29:29.902648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.992 [2024-11-19 09:29:29.914764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.992 [2024-11-19 09:29:29.915191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.992 [2024-11-19 09:29:29.915208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:29.915216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:29.915383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:29.915547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:29.915557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:29.915563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:29.915569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.993 [2024-11-19 09:29:29.927673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.993 [2024-11-19 09:29:29.927998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-11-19 09:29:29.928016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:29.928023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:29.928186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:29.928348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:29.928357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:29.928364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:29.928371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.993 [2024-11-19 09:29:29.940667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.993 [2024-11-19 09:29:29.941083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-11-19 09:29:29.941122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:29.941148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:29.941676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:29.941841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:29.941850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:29.941856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:29.941863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.993 [2024-11-19 09:29:29.953670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.993 [2024-11-19 09:29:29.954109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-11-19 09:29:29.954156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:29.954180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:29.954701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:29.954877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:29.954886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:29.954898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:29.954904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.993 [2024-11-19 09:29:29.966528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.993 [2024-11-19 09:29:29.966897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-11-19 09:29:29.966915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:29.966922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:29.967100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:29.967283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:29.967292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:29.967299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:29.967305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.993 [2024-11-19 09:29:29.979508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.993 [2024-11-19 09:29:29.979918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-11-19 09:29:29.979975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:29.980000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:29.980582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:29.981088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:29.981098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:29.981105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:29.981111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.993 [2024-11-19 09:29:29.992421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.993 [2024-11-19 09:29:29.992840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-11-19 09:29:29.992890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:29.992914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:29.993507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:29.993792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:29.993801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:29.993808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:29.993815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.993 [2024-11-19 09:29:30.005534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.993 [2024-11-19 09:29:30.005924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-11-19 09:29:30.005942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:30.005956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:30.006134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:30.006312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:30.006322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:30.006330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:30.006337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:28.993 [2024-11-19 09:29:30.018676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.993 [2024-11-19 09:29:30.019059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-11-19 09:29:30.019078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:30.019087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:30.019265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:30.019444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:30.019454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:30.019462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:30.019469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:28.993 [2024-11-19 09:29:30.031866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:28.993 [2024-11-19 09:29:30.032310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.993 [2024-11-19 09:29:30.032328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:28.993 [2024-11-19 09:29:30.032337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:28.993 [2024-11-19 09:29:30.032516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:28.993 [2024-11-19 09:29:30.032695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:28.993 [2024-11-19 09:29:30.032705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:28.993 [2024-11-19 09:29:30.032713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:28.993 [2024-11-19 09:29:30.032725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.253 [2024-11-19 09:29:30.045196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.253 [2024-11-19 09:29:30.045629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.253 [2024-11-19 09:29:30.045651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.253 [2024-11-19 09:29:30.045659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.045837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.046023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.046034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.046041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.046047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.254 [2024-11-19 09:29:30.058280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.058635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.254 [2024-11-19 09:29:30.058652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.254 [2024-11-19 09:29:30.058660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.058838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.059024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.059035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.059042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.059049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.254 [2024-11-19 09:29:30.071473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.071876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.254 [2024-11-19 09:29:30.071895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.254 [2024-11-19 09:29:30.071904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.072087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.072266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.072276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.072283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.072290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.254 [2024-11-19 09:29:30.084688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.085125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.254 [2024-11-19 09:29:30.085145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.254 [2024-11-19 09:29:30.085153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.085331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.085515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.085525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.085532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.085539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.254 [2024-11-19 09:29:30.097871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.098171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.254 [2024-11-19 09:29:30.098190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.254 [2024-11-19 09:29:30.098198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.098377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.098557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.098567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.098574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.098581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.254 9770.33 IOPS, 38.17 MiB/s [2024-11-19T08:29:30.313Z] [2024-11-19 09:29:30.111075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.111444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.254 [2024-11-19 09:29:30.111463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.254 [2024-11-19 09:29:30.111472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.111656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.111831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.111841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.111848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.111856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.254 [2024-11-19 09:29:30.124271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.124610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.254 [2024-11-19 09:29:30.124628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.254 [2024-11-19 09:29:30.124636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.124814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.124997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.125008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.125019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.125026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
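
The 9770.33 IOPS, 38.17 MiB/s entry interleaved above is the workload's periodic throughput line, not a driver message, and its two figures are mutually consistent with the reads seen earlier: each READ is len:8 blocks, i.e. 4096 B assuming the namespace's 512-byte block size, and 9770.33 IO/s × 4096 B/IO ≈ 40.0 MB/s ≈ 38.17 MiB/s.
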
00:27:29.254 [2024-11-19 09:29:30.137270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.138192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.254 [2024-11-19 09:29:30.138218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.254 [2024-11-19 09:29:30.138227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.138417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.138597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.138608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.138615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.138622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.254 [2024-11-19 09:29:30.150259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.150619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.254 [2024-11-19 09:29:30.150638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.254 [2024-11-19 09:29:30.150646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.150818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.151014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.151025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.151032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.151039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.254 [2024-11-19 09:29:30.163370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.163756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.254 [2024-11-19 09:29:30.163803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.254 [2024-11-19 09:29:30.163827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.254 [2024-11-19 09:29:30.164362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.254 [2024-11-19 09:29:30.164544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.254 [2024-11-19 09:29:30.164555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.254 [2024-11-19 09:29:30.164562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.254 [2024-11-19 09:29:30.164569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.254 [2024-11-19 09:29:30.176490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.254 [2024-11-19 09:29:30.176764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.176782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.176789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.176972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.177152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.177162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.177169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.177176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.255 [2024-11-19 09:29:30.189700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.255 [2024-11-19 09:29:30.189972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.189990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.189998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.190170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.190361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.190371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.190378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.190385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.255 [2024-11-19 09:29:30.202734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.255 [2024-11-19 09:29:30.203029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.203047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.203055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.203228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.203402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.203413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.203420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.203427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.255 [2024-11-19 09:29:30.215860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.255 [2024-11-19 09:29:30.216155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.216173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.216184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.216363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.216542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.216552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.216559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.216567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.255 [2024-11-19 09:29:30.228905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.255 [2024-11-19 09:29:30.229198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.229227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.229235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.229407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.229582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.229591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.229598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.229604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.255 [2024-11-19 09:29:30.242039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.255 [2024-11-19 09:29:30.242409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.242426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.242434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.242606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.242780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.242790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.242796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.242804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.255 [2024-11-19 09:29:30.255129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.255 [2024-11-19 09:29:30.255500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.255546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.255569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.256163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.256625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.256635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.256642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.256648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.255 [2024-11-19 09:29:30.268255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.255 [2024-11-19 09:29:30.268675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.268693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.268701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.268885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.269064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.269075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.269081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.269088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.255 [2024-11-19 09:29:30.281355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.255 [2024-11-19 09:29:30.281646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.281665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.281673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.281850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.282035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.282046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.282053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.282060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.255 [2024-11-19 09:29:30.294444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.255 [2024-11-19 09:29:30.294720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.255 [2024-11-19 09:29:30.294738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.255 [2024-11-19 09:29:30.294746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.255 [2024-11-19 09:29:30.294924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.255 [2024-11-19 09:29:30.295109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.255 [2024-11-19 09:29:30.295120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.255 [2024-11-19 09:29:30.295143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.255 [2024-11-19 09:29:30.295151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.516 [2024-11-19 09:29:30.307590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.307923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.307941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.307954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.516 [2024-11-19 09:29:30.308133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.516 [2024-11-19 09:29:30.308312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.516 [2024-11-19 09:29:30.308322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.516 [2024-11-19 09:29:30.308329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.516 [2024-11-19 09:29:30.308336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.516 [2024-11-19 09:29:30.320723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.321173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.321191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.321199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.516 [2024-11-19 09:29:30.321371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.516 [2024-11-19 09:29:30.321545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.516 [2024-11-19 09:29:30.321555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.516 [2024-11-19 09:29:30.321562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.516 [2024-11-19 09:29:30.321568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.516 [2024-11-19 09:29:30.333783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.334072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.334090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.334098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.516 [2024-11-19 09:29:30.334282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.516 [2024-11-19 09:29:30.334456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.516 [2024-11-19 09:29:30.334467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.516 [2024-11-19 09:29:30.334473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.516 [2024-11-19 09:29:30.334480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.516 [2024-11-19 09:29:30.346884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.347240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.347258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.347267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.516 [2024-11-19 09:29:30.347445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.516 [2024-11-19 09:29:30.347625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.516 [2024-11-19 09:29:30.347635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.516 [2024-11-19 09:29:30.347642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.516 [2024-11-19 09:29:30.347649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.516 [2024-11-19 09:29:30.359906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.360209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.360227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.360235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.516 [2024-11-19 09:29:30.360413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.516 [2024-11-19 09:29:30.360594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.516 [2024-11-19 09:29:30.360604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.516 [2024-11-19 09:29:30.360610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.516 [2024-11-19 09:29:30.360617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.516 [2024-11-19 09:29:30.372881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.373216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.373234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.373242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.516 [2024-11-19 09:29:30.373415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.516 [2024-11-19 09:29:30.373589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.516 [2024-11-19 09:29:30.373599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.516 [2024-11-19 09:29:30.373606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.516 [2024-11-19 09:29:30.373613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.516 [2024-11-19 09:29:30.385965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.386379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.386425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.386457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.516 [2024-11-19 09:29:30.386941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.516 [2024-11-19 09:29:30.387122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.516 [2024-11-19 09:29:30.387133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.516 [2024-11-19 09:29:30.387140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.516 [2024-11-19 09:29:30.387147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.516 [2024-11-19 09:29:30.398993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.399381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.399426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.399450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.516 [2024-11-19 09:29:30.399707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.516 [2024-11-19 09:29:30.399882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.516 [2024-11-19 09:29:30.399891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.516 [2024-11-19 09:29:30.399898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.516 [2024-11-19 09:29:30.399905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.516 [2024-11-19 09:29:30.412097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.412436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.412454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.412462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.516 [2024-11-19 09:29:30.412635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.516 [2024-11-19 09:29:30.412809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.516 [2024-11-19 09:29:30.412819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.516 [2024-11-19 09:29:30.412826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.516 [2024-11-19 09:29:30.412832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.516 [2024-11-19 09:29:30.425171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.516 [2024-11-19 09:29:30.425450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.516 [2024-11-19 09:29:30.425467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.516 [2024-11-19 09:29:30.425475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.425648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.425826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.425835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.425842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.425848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.517 [2024-11-19 09:29:30.438243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.438517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.517 [2024-11-19 09:29:30.438534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.517 [2024-11-19 09:29:30.438543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.438717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.438891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.438901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.438908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.438914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.517 [2024-11-19 09:29:30.451354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.451646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.517 [2024-11-19 09:29:30.451690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.517 [2024-11-19 09:29:30.451713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.452243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.452418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.452428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.452435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.452441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.517 [2024-11-19 09:29:30.464348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.464630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.517 [2024-11-19 09:29:30.464648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.517 [2024-11-19 09:29:30.464657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.464829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.465009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.465019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.465031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.465038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.517 [2024-11-19 09:29:30.477410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.477702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.517 [2024-11-19 09:29:30.477721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.517 [2024-11-19 09:29:30.477729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.477907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.478091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.478102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.478110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.478117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.517 [2024-11-19 09:29:30.490367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.490718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.517 [2024-11-19 09:29:30.490760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.517 [2024-11-19 09:29:30.490786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.491325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.491500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.491510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.491517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.491524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.517 [2024-11-19 09:29:30.503422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.503851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.517 [2024-11-19 09:29:30.503869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.517 [2024-11-19 09:29:30.503877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.504053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.504228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.504238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.504245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.504252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.517 [2024-11-19 09:29:30.516424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.516774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.517 [2024-11-19 09:29:30.516791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.517 [2024-11-19 09:29:30.516800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.516994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.517173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.517183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.517190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.517197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.517 [2024-11-19 09:29:30.529792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.530234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.517 [2024-11-19 09:29:30.530253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.517 [2024-11-19 09:29:30.530261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.530434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.530609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.530619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.530626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.530633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.517 [2024-11-19 09:29:30.542840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.543276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.517 [2024-11-19 09:29:30.543322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.517 [2024-11-19 09:29:30.543346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.517 [2024-11-19 09:29:30.543928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.517 [2024-11-19 09:29:30.544124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.517 [2024-11-19 09:29:30.544135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.517 [2024-11-19 09:29:30.544142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.517 [2024-11-19 09:29:30.544148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.517 [2024-11-19 09:29:30.555810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.517 [2024-11-19 09:29:30.556177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.518 [2024-11-19 09:29:30.556195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.518 [2024-11-19 09:29:30.556206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.518 [2024-11-19 09:29:30.556379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.518 [2024-11-19 09:29:30.556553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.518 [2024-11-19 09:29:30.556563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.518 [2024-11-19 09:29:30.556570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.518 [2024-11-19 09:29:30.556576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.518 [2024-11-19 09:29:30.568894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.777 [2024-11-19 09:29:30.569266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.777 [2024-11-19 09:29:30.569284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.777 [2024-11-19 09:29:30.569292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.777 [2024-11-19 09:29:30.569470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.777 [2024-11-19 09:29:30.569651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.777 [2024-11-19 09:29:30.569661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.777 [2024-11-19 09:29:30.569669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.777 [2024-11-19 09:29:30.569676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.777 [2024-11-19 09:29:30.581892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.777 [2024-11-19 09:29:30.582299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.777 [2024-11-19 09:29:30.582317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.777 [2024-11-19 09:29:30.582326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.777 [2024-11-19 09:29:30.582498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.777 [2024-11-19 09:29:30.582671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.777 [2024-11-19 09:29:30.582681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.777 [2024-11-19 09:29:30.582688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.777 [2024-11-19 09:29:30.582695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.777 [2024-11-19 09:29:30.594937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.777 [2024-11-19 09:29:30.595305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.778 [2024-11-19 09:29:30.595324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.778 [2024-11-19 09:29:30.595332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.778 [2024-11-19 09:29:30.595505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.778 [2024-11-19 09:29:30.595683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.778 [2024-11-19 09:29:30.595693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.778 [2024-11-19 09:29:30.595700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.778 [2024-11-19 09:29:30.595708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.778 [2024-11-19 09:29:30.608059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.778 [2024-11-19 09:29:30.608379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.778 [2024-11-19 09:29:30.608397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.778 [2024-11-19 09:29:30.608405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.778 [2024-11-19 09:29:30.608578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.778 [2024-11-19 09:29:30.608751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.778 [2024-11-19 09:29:30.608762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.778 [2024-11-19 09:29:30.608768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.778 [2024-11-19 09:29:30.608775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.778 [2024-11-19 09:29:30.621192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.778 [2024-11-19 09:29:30.621553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.778 [2024-11-19 09:29:30.621596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.778 [2024-11-19 09:29:30.621620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.778 [2024-11-19 09:29:30.622215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.778 [2024-11-19 09:29:30.622411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.778 [2024-11-19 09:29:30.622421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.778 [2024-11-19 09:29:30.622428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.778 [2024-11-19 09:29:30.622435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.778 [2024-11-19 09:29:30.634176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.778 [2024-11-19 09:29:30.634590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.778 [2024-11-19 09:29:30.634607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.778 [2024-11-19 09:29:30.634614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.778 [2024-11-19 09:29:30.634778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.778 [2024-11-19 09:29:30.634942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.778 [2024-11-19 09:29:30.634958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.778 [2024-11-19 09:29:30.634969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.778 [2024-11-19 09:29:30.634976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.778 [2024-11-19 09:29:30.647207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.778 [2024-11-19 09:29:30.647611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.778 [2024-11-19 09:29:30.647656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.778 [2024-11-19 09:29:30.647680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.778 [2024-11-19 09:29:30.648276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.778 [2024-11-19 09:29:30.648486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.778 [2024-11-19 09:29:30.648495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.778 [2024-11-19 09:29:30.648501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.778 [2024-11-19 09:29:30.648508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.778 [2024-11-19 09:29:30.660098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.778 [2024-11-19 09:29:30.660519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.778 [2024-11-19 09:29:30.660536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.778 [2024-11-19 09:29:30.660543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.778 [2024-11-19 09:29:30.660708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.778 [2024-11-19 09:29:30.660872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.778 [2024-11-19 09:29:30.660882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.778 [2024-11-19 09:29:30.660889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.778 [2024-11-19 09:29:30.660896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.778 [2024-11-19 09:29:30.672953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.778 [2024-11-19 09:29:30.673264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.778 [2024-11-19 09:29:30.673281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.778 [2024-11-19 09:29:30.673289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.778 [2024-11-19 09:29:30.673452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.778 [2024-11-19 09:29:30.673616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.778 [2024-11-19 09:29:30.673625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.779 [2024-11-19 09:29:30.673631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.779 [2024-11-19 09:29:30.673637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.779 [2024-11-19 09:29:30.685849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.779 [2024-11-19 09:29:30.686283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.779 [2024-11-19 09:29:30.686328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.779 [2024-11-19 09:29:30.686351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.779 [2024-11-19 09:29:30.686827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.779 [2024-11-19 09:29:30.687015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.779 [2024-11-19 09:29:30.687025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.779 [2024-11-19 09:29:30.687032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.779 [2024-11-19 09:29:30.687039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.779 [2024-11-19 09:29:30.698778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.779 [2024-11-19 09:29:30.699230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.779 [2024-11-19 09:29:30.699274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.779 [2024-11-19 09:29:30.699297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.779 [2024-11-19 09:29:30.699815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.779 [2024-11-19 09:29:30.700028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.779 [2024-11-19 09:29:30.700039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.779 [2024-11-19 09:29:30.700046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.779 [2024-11-19 09:29:30.700052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.779 [2024-11-19 09:29:30.711725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.779 [2024-11-19 09:29:30.712150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.779 [2024-11-19 09:29:30.712196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.779 [2024-11-19 09:29:30.712221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.779 [2024-11-19 09:29:30.712717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.779 [2024-11-19 09:29:30.712882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.779 [2024-11-19 09:29:30.712891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.779 [2024-11-19 09:29:30.712898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.779 [2024-11-19 09:29:30.712904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.779 [2024-11-19 09:29:30.724591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.779 [2024-11-19 09:29:30.724932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.779 [2024-11-19 09:29:30.724957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.779 [2024-11-19 09:29:30.724968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.779 [2024-11-19 09:29:30.725131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.779 [2024-11-19 09:29:30.725295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.779 [2024-11-19 09:29:30.725305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.779 [2024-11-19 09:29:30.725311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.779 [2024-11-19 09:29:30.725318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.779 [2024-11-19 09:29:30.737539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.779 [2024-11-19 09:29:30.737964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.779 [2024-11-19 09:29:30.738013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.779 [2024-11-19 09:29:30.738037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.779 [2024-11-19 09:29:30.738469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.779 [2024-11-19 09:29:30.738635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.779 [2024-11-19 09:29:30.738644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.779 [2024-11-19 09:29:30.738651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.779 [2024-11-19 09:29:30.738657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.779 [2024-11-19 09:29:30.750514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.779 [2024-11-19 09:29:30.750894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.779 [2024-11-19 09:29:30.750911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.779 [2024-11-19 09:29:30.750919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.779 [2024-11-19 09:29:30.751088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.779 [2024-11-19 09:29:30.751253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.779 [2024-11-19 09:29:30.751262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.779 [2024-11-19 09:29:30.751269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.779 [2024-11-19 09:29:30.751275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.779 [2024-11-19 09:29:30.763356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.779 [2024-11-19 09:29:30.763753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.779 [2024-11-19 09:29:30.763769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.779 [2024-11-19 09:29:30.763777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.780 [2024-11-19 09:29:30.763940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.780 [2024-11-19 09:29:30.764140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.780 [2024-11-19 09:29:30.764151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.780 [2024-11-19 09:29:30.764157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.780 [2024-11-19 09:29:30.764164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.780 [2024-11-19 09:29:30.776240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.780 [2024-11-19 09:29:30.776632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.780 [2024-11-19 09:29:30.776650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.780 [2024-11-19 09:29:30.776657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.780 [2024-11-19 09:29:30.776822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.780 [2024-11-19 09:29:30.777010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.780 [2024-11-19 09:29:30.777020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.780 [2024-11-19 09:29:30.777027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.780 [2024-11-19 09:29:30.777034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.780 [2024-11-19 09:29:30.789434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.780 [2024-11-19 09:29:30.789803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.780 [2024-11-19 09:29:30.789821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.780 [2024-11-19 09:29:30.789829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.780 [2024-11-19 09:29:30.790020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.780 [2024-11-19 09:29:30.790195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.780 [2024-11-19 09:29:30.790217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.780 [2024-11-19 09:29:30.790224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.780 [2024-11-19 09:29:30.790230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.780 [2024-11-19 09:29:30.802483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.780 [2024-11-19 09:29:30.802914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.780 [2024-11-19 09:29:30.802971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.780 [2024-11-19 09:29:30.802997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.780 [2024-11-19 09:29:30.803576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.780 [2024-11-19 09:29:30.804088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.780 [2024-11-19 09:29:30.804099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.780 [2024-11-19 09:29:30.804105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.780 [2024-11-19 09:29:30.804114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:29.780 [2024-11-19 09:29:30.815478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.780 [2024-11-19 09:29:30.815911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.780 [2024-11-19 09:29:30.815966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.780 [2024-11-19 09:29:30.815992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.780 [2024-11-19 09:29:30.816572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.780 [2024-11-19 09:29:30.817003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.780 [2024-11-19 09:29:30.817013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.780 [2024-11-19 09:29:30.817020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.780 [2024-11-19 09:29:30.817027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:29.780 [2024-11-19 09:29:30.828580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:29.780 [2024-11-19 09:29:30.829030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.780 [2024-11-19 09:29:30.829049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:29.780 [2024-11-19 09:29:30.829057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:29.780 [2024-11-19 09:29:30.829234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:29.780 [2024-11-19 09:29:30.829414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:29.780 [2024-11-19 09:29:30.829424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:29.780 [2024-11-19 09:29:30.829431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:29.780 [2024-11-19 09:29:30.829438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.041 [2024-11-19 09:29:30.841541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.041 [2024-11-19 09:29:30.841977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.041 [2024-11-19 09:29:30.842023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.041 [2024-11-19 09:29:30.842047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.041 [2024-11-19 09:29:30.842628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.041 [2024-11-19 09:29:30.842863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.041 [2024-11-19 09:29:30.842872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.041 [2024-11-19 09:29:30.842879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.041 [2024-11-19 09:29:30.842886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.041 [2024-11-19 09:29:30.854382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.041 [2024-11-19 09:29:30.854820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.041 [2024-11-19 09:29:30.854866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.041 [2024-11-19 09:29:30.854889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.041 [2024-11-19 09:29:30.855484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.041 [2024-11-19 09:29:30.856018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.041 [2024-11-19 09:29:30.856029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.041 [2024-11-19 09:29:30.856036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.041 [2024-11-19 09:29:30.856042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.041 [2024-11-19 09:29:30.867210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.041 [2024-11-19 09:29:30.867634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.041 [2024-11-19 09:29:30.867680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.041 [2024-11-19 09:29:30.867704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.041 [2024-11-19 09:29:30.868113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.041 [2024-11-19 09:29:30.868288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.041 [2024-11-19 09:29:30.868298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.041 [2024-11-19 09:29:30.868305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.041 [2024-11-19 09:29:30.868312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.041 [2024-11-19 09:29:30.880181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.041 [2024-11-19 09:29:30.880542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.041 [2024-11-19 09:29:30.880587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.041 [2024-11-19 09:29:30.880611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.041 [2024-11-19 09:29:30.881081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.041 [2024-11-19 09:29:30.881255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.041 [2024-11-19 09:29:30.881265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.881271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.881278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:30.893079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:30.893494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:30.893542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:30.893567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:30.894143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:30.894318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.042 [2024-11-19 09:29:30.894327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.894333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.894340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:30.905900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:30.906319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:30.906336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:30.906344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:30.906508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:30.906672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.042 [2024-11-19 09:29:30.906681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.906688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.906694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:30.918729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:30.919086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:30.919104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:30.919111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:30.919274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:30.919437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.042 [2024-11-19 09:29:30.919446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.919453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.919459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:30.931580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:30.931916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:30.931933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:30.931941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:30.932133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:30.932308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.042 [2024-11-19 09:29:30.932321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.932328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.932335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:30.944493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:30.944904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:30.944921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:30.944929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:30.945100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:30.945265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.042 [2024-11-19 09:29:30.945275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.945281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.945288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:30.957417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:30.957836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:30.957882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:30.957907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:30.958309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:30.958474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.042 [2024-11-19 09:29:30.958483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.958490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.958496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:30.970286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:30.970744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:30.970789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:30.970812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:30.971298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:30.971473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.042 [2024-11-19 09:29:30.971483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.971490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.971499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:30.983221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:30.983635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:30.983652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:30.983660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:30.983831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:30.984020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.042 [2024-11-19 09:29:30.984031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.984037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.984044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:30.996122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:30.996516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:30.996532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:30.996540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:30.996703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:30.996868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.042 [2024-11-19 09:29:30.996877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.042 [2024-11-19 09:29:30.996884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.042 [2024-11-19 09:29:30.996890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.042 [2024-11-19 09:29:31.009091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.042 [2024-11-19 09:29:31.009430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.042 [2024-11-19 09:29:31.009446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.042 [2024-11-19 09:29:31.009455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.042 [2024-11-19 09:29:31.009618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.042 [2024-11-19 09:29:31.009782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.043 [2024-11-19 09:29:31.009792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.043 [2024-11-19 09:29:31.009798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.043 [2024-11-19 09:29:31.009804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.043 [2024-11-19 09:29:31.021956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.043 [2024-11-19 09:29:31.022369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.043 [2024-11-19 09:29:31.022404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.043 [2024-11-19 09:29:31.022430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.043 [2024-11-19 09:29:31.023023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.043 [2024-11-19 09:29:31.023517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.043 [2024-11-19 09:29:31.023526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.043 [2024-11-19 09:29:31.023532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.043 [2024-11-19 09:29:31.023539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.043 [2024-11-19 09:29:31.034871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.043 [2024-11-19 09:29:31.035284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.043 [2024-11-19 09:29:31.035302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.043 [2024-11-19 09:29:31.035309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.043 [2024-11-19 09:29:31.035482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.043 [2024-11-19 09:29:31.035656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.043 [2024-11-19 09:29:31.035665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.043 [2024-11-19 09:29:31.035672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.043 [2024-11-19 09:29:31.035679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.043 [2024-11-19 09:29:31.048041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.043 [2024-11-19 09:29:31.048473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.043 [2024-11-19 09:29:31.048490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.043 [2024-11-19 09:29:31.048499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.043 [2024-11-19 09:29:31.048676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.043 [2024-11-19 09:29:31.048855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.043 [2024-11-19 09:29:31.048865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.043 [2024-11-19 09:29:31.048872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.043 [2024-11-19 09:29:31.048878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.043 [2024-11-19 09:29:31.061040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.043 [2024-11-19 09:29:31.061409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.043 [2024-11-19 09:29:31.061427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.043 [2024-11-19 09:29:31.061435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.043 [2024-11-19 09:29:31.061602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.043 [2024-11-19 09:29:31.061765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.043 [2024-11-19 09:29:31.061775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.043 [2024-11-19 09:29:31.061782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.043 [2024-11-19 09:29:31.061788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.043 [2024-11-19 09:29:31.073833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.043 [2024-11-19 09:29:31.074228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.043 [2024-11-19 09:29:31.074246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.043 [2024-11-19 09:29:31.074254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.043 [2024-11-19 09:29:31.074416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.043 [2024-11-19 09:29:31.074581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.043 [2024-11-19 09:29:31.074591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.043 [2024-11-19 09:29:31.074598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.043 [2024-11-19 09:29:31.074604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.043 [2024-11-19 09:29:31.086709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.043 [2024-11-19 09:29:31.087041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.043 [2024-11-19 09:29:31.087058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.043 [2024-11-19 09:29:31.087066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.043 [2024-11-19 09:29:31.087229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.043 [2024-11-19 09:29:31.087392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.043 [2024-11-19 09:29:31.087401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.043 [2024-11-19 09:29:31.087408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.043 [2024-11-19 09:29:31.087414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.303 [2024-11-19 09:29:31.099777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.303 [2024-11-19 09:29:31.100207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.303 [2024-11-19 09:29:31.100225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.303 [2024-11-19 09:29:31.100233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.303 [2024-11-19 09:29:31.100410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.303 [2024-11-19 09:29:31.100590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.303 [2024-11-19 09:29:31.100603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.303 [2024-11-19 09:29:31.100610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.303 [2024-11-19 09:29:31.100616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.303 7327.75 IOPS, 28.62 MiB/s [2024-11-19T08:29:31.362Z] [2024-11-19 09:29:31.112568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.303 [2024-11-19 09:29:31.112993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.303 [2024-11-19 09:29:31.113043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.303 [2024-11-19 09:29:31.113067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.303 [2024-11-19 09:29:31.113654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.303 [2024-11-19 09:29:31.113819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.303 [2024-11-19 09:29:31.113828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.303 [2024-11-19 09:29:31.113835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.303 [2024-11-19 09:29:31.113841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
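The bdevperf progress tick interleaved above is internally consistent with a 4 KiB I/O size: 7327.75 IOPS x 4096 bytes = 30,014,464 B/s = 28.62 MiB/s, so the workload is still completing I/O (presumably via a healthy path of the same bdev) while controller 2 of cnode1 remains unreachable.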
00:27:30.303 [2024-11-19 09:29:31.125398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.303 [2024-11-19 09:29:31.125751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.303 [2024-11-19 09:29:31.125769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.303 [2024-11-19 09:29:31.125776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.303 [2024-11-19 09:29:31.125939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.303 [2024-11-19 09:29:31.126134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.303 [2024-11-19 09:29:31.126144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.303 [2024-11-19 09:29:31.126151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.303 [2024-11-19 09:29:31.126158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.303 [2024-11-19 09:29:31.138332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.303 [2024-11-19 09:29:31.138676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.303 [2024-11-19 09:29:31.138721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.303 [2024-11-19 09:29:31.138745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.303 [2024-11-19 09:29:31.139341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.303 [2024-11-19 09:29:31.139830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.303 [2024-11-19 09:29:31.139840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.303 [2024-11-19 09:29:31.139846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.303 [2024-11-19 09:29:31.139856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.303 [2024-11-19 09:29:31.151257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.303 [2024-11-19 09:29:31.151648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.303 [2024-11-19 09:29:31.151665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.303 [2024-11-19 09:29:31.151673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.303 [2024-11-19 09:29:31.151837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.303 [2024-11-19 09:29:31.152023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.303 [2024-11-19 09:29:31.152033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.303 [2024-11-19 09:29:31.152040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.303 [2024-11-19 09:29:31.152047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.303 [2024-11-19 09:29:31.164129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.303 [2024-11-19 09:29:31.164525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.303 [2024-11-19 09:29:31.164543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.303 [2024-11-19 09:29:31.164551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.303 [2024-11-19 09:29:31.164714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.303 [2024-11-19 09:29:31.164877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.303 [2024-11-19 09:29:31.164888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.303 [2024-11-19 09:29:31.164894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.304 [2024-11-19 09:29:31.164900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.304 [2024-11-19 09:29:31.176964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.304 [2024-11-19 09:29:31.177306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.304 [2024-11-19 09:29:31.177323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.304 [2024-11-19 09:29:31.177331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.304 [2024-11-19 09:29:31.177493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.304 [2024-11-19 09:29:31.177657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.304 [2024-11-19 09:29:31.177667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.304 [2024-11-19 09:29:31.177673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.304 [2024-11-19 09:29:31.177680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.304 [2024-11-19 09:29:31.189899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.304 [2024-11-19 09:29:31.190264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.304 [2024-11-19 09:29:31.190309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.304 [2024-11-19 09:29:31.190332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.304 [2024-11-19 09:29:31.190808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.304 [2024-11-19 09:29:31.190995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.304 [2024-11-19 09:29:31.191005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.304 [2024-11-19 09:29:31.191012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.304 [2024-11-19 09:29:31.191019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.304 [2024-11-19 09:29:31.203012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.304 [2024-11-19 09:29:31.203424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.304 [2024-11-19 09:29:31.203442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.304 [2024-11-19 09:29:31.203449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.304 [2024-11-19 09:29:31.203613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.304 [2024-11-19 09:29:31.203777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.304 [2024-11-19 09:29:31.203786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.304 [2024-11-19 09:29:31.203793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.304 [2024-11-19 09:29:31.203800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.304 [2024-11-19 09:29:31.215936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.304 [2024-11-19 09:29:31.216366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.304 [2024-11-19 09:29:31.216411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.304 [2024-11-19 09:29:31.216435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.304 [2024-11-19 09:29:31.216919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.304 [2024-11-19 09:29:31.217111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.304 [2024-11-19 09:29:31.217121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.304 [2024-11-19 09:29:31.217128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.304 [2024-11-19 09:29:31.217135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.304 [2024-11-19 09:29:31.228841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.304 [2024-11-19 09:29:31.229260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.304 [2024-11-19 09:29:31.229277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.304 [2024-11-19 09:29:31.229285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.304 [2024-11-19 09:29:31.229451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.304 [2024-11-19 09:29:31.229615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.304 [2024-11-19 09:29:31.229624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.304 [2024-11-19 09:29:31.229630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.304 [2024-11-19 09:29:31.229636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.304 [2024-11-19 09:29:31.241780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.304 [2024-11-19 09:29:31.242206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.304 [2024-11-19 09:29:31.242252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.304 [2024-11-19 09:29:31.242276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.304 [2024-11-19 09:29:31.242732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.304 [2024-11-19 09:29:31.242896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.304 [2024-11-19 09:29:31.242907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.304 [2024-11-19 09:29:31.242913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.304 [2024-11-19 09:29:31.242920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.304 [2024-11-19 09:29:31.254608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.304 [2024-11-19 09:29:31.255021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.304 [2024-11-19 09:29:31.255039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.304 [2024-11-19 09:29:31.255047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.304 [2024-11-19 09:29:31.255210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.304 [2024-11-19 09:29:31.255374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.304 [2024-11-19 09:29:31.255384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.304 [2024-11-19 09:29:31.255391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.304 [2024-11-19 09:29:31.255397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.304 [2024-11-19 09:29:31.267477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.304 [2024-11-19 09:29:31.267889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.304 [2024-11-19 09:29:31.267906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.304 [2024-11-19 09:29:31.267914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.304 [2024-11-19 09:29:31.268104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.304 [2024-11-19 09:29:31.268278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.304 [2024-11-19 09:29:31.268291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.304 [2024-11-19 09:29:31.268298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.304 [2024-11-19 09:29:31.268304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.305 [2024-11-19 09:29:31.280489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.305 [2024-11-19 09:29:31.280805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.305 [2024-11-19 09:29:31.280822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.305 [2024-11-19 09:29:31.280829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.305 [2024-11-19 09:29:31.281016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.305 [2024-11-19 09:29:31.281190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.305 [2024-11-19 09:29:31.281200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.305 [2024-11-19 09:29:31.281207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.305 [2024-11-19 09:29:31.281213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.305 [2024-11-19 09:29:31.293334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.305 [2024-11-19 09:29:31.293725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.305 [2024-11-19 09:29:31.293743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.305 [2024-11-19 09:29:31.293751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.305 [2024-11-19 09:29:31.293923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.305 [2024-11-19 09:29:31.294105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.305 [2024-11-19 09:29:31.294115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.305 [2024-11-19 09:29:31.294122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.305 [2024-11-19 09:29:31.294130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.305 [2024-11-19 09:29:31.306502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.305 [2024-11-19 09:29:31.306838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.305 [2024-11-19 09:29:31.306856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.305 [2024-11-19 09:29:31.306863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.305 [2024-11-19 09:29:31.307048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.305 [2024-11-19 09:29:31.307238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.305 [2024-11-19 09:29:31.307248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.305 [2024-11-19 09:29:31.307254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.305 [2024-11-19 09:29:31.307264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.305 [2024-11-19 09:29:31.319364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.305 [2024-11-19 09:29:31.319692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.305 [2024-11-19 09:29:31.319708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.305 [2024-11-19 09:29:31.319716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.305 [2024-11-19 09:29:31.319879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.305 [2024-11-19 09:29:31.320047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.305 [2024-11-19 09:29:31.320057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.305 [2024-11-19 09:29:31.320063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.305 [2024-11-19 09:29:31.320069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.305 [2024-11-19 09:29:31.332360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.305 [2024-11-19 09:29:31.332764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.305 [2024-11-19 09:29:31.332782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.305 [2024-11-19 09:29:31.332790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.305 [2024-11-19 09:29:31.332958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.305 [2024-11-19 09:29:31.333148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.305 [2024-11-19 09:29:31.333158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.305 [2024-11-19 09:29:31.333165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.305 [2024-11-19 09:29:31.333171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.305 [2024-11-19 09:29:31.345187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.305 [2024-11-19 09:29:31.345599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.305 [2024-11-19 09:29:31.345616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.305 [2024-11-19 09:29:31.345624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.305 [2024-11-19 09:29:31.345789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.305 [2024-11-19 09:29:31.345960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.305 [2024-11-19 09:29:31.345970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.305 [2024-11-19 09:29:31.345976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.305 [2024-11-19 09:29:31.345983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.565 [2024-11-19 09:29:31.358299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.565 [2024-11-19 09:29:31.358729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.565 [2024-11-19 09:29:31.358782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.565 [2024-11-19 09:29:31.358807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.565 [2024-11-19 09:29:31.359341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.565 [2024-11-19 09:29:31.359517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.565 [2024-11-19 09:29:31.359527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.565 [2024-11-19 09:29:31.359534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.565 [2024-11-19 09:29:31.359541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.565 [2024-11-19 09:29:31.371094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.565 [2024-11-19 09:29:31.371502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.565 [2024-11-19 09:29:31.371518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.566 [2024-11-19 09:29:31.371526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.566 [2024-11-19 09:29:31.371689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.566 [2024-11-19 09:29:31.371854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.566 [2024-11-19 09:29:31.371863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.566 [2024-11-19 09:29:31.371869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.566 [2024-11-19 09:29:31.371876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.566 [2024-11-19 09:29:31.383905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.566 [2024-11-19 09:29:31.384314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.566 [2024-11-19 09:29:31.384331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.566 [2024-11-19 09:29:31.384337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.566 [2024-11-19 09:29:31.384501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.566 [2024-11-19 09:29:31.384665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.566 [2024-11-19 09:29:31.384674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.566 [2024-11-19 09:29:31.384681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.566 [2024-11-19 09:29:31.384688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.566 [2024-11-19 09:29:31.396779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.566 [2024-11-19 09:29:31.397171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.566 [2024-11-19 09:29:31.397188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.566 [2024-11-19 09:29:31.397195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.566 [2024-11-19 09:29:31.397365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.566 [2024-11-19 09:29:31.397529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.566 [2024-11-19 09:29:31.397538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.566 [2024-11-19 09:29:31.397544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.566 [2024-11-19 09:29:31.397551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.566 [2024-11-19 09:29:31.409714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.566 [2024-11-19 09:29:31.410138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.566 [2024-11-19 09:29:31.410184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.566 [2024-11-19 09:29:31.410208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.566 [2024-11-19 09:29:31.410789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.566 [2024-11-19 09:29:31.411336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.566 [2024-11-19 09:29:31.411347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.566 [2024-11-19 09:29:31.411353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.566 [2024-11-19 09:29:31.411360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.566 [2024-11-19 09:29:31.422604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.566 [2024-11-19 09:29:31.423024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.566 [2024-11-19 09:29:31.423066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.566 [2024-11-19 09:29:31.423092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.566 [2024-11-19 09:29:31.423672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.566 [2024-11-19 09:29:31.424271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.566 [2024-11-19 09:29:31.424299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.566 [2024-11-19 09:29:31.424320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.566 [2024-11-19 09:29:31.424339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.566 [2024-11-19 09:29:31.435402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.566 [2024-11-19 09:29:31.435770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.566 [2024-11-19 09:29:31.435788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.566 [2024-11-19 09:29:31.435795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.566 [2024-11-19 09:29:31.435964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.566 [2024-11-19 09:29:31.436153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.566 [2024-11-19 09:29:31.436166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.566 [2024-11-19 09:29:31.436173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.566 [2024-11-19 09:29:31.436179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.566 [2024-11-19 09:29:31.448239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:30.566 [2024-11-19 09:29:31.448581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:30.566 [2024-11-19 09:29:31.448598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420
00:27:30.566 [2024-11-19 09:29:31.448607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set
00:27:30.566 [2024-11-19 09:29:31.448778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor
00:27:30.566 [2024-11-19 09:29:31.448964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:30.566 [2024-11-19 09:29:31.448990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:30.566 [2024-11-19 09:29:31.448998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:30.566 [2024-11-19 09:29:31.449005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:30.566 [2024-11-19 09:29:31.461172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.566 [2024-11-19 09:29:31.461590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.566 [2024-11-19 09:29:31.461607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.566 [2024-11-19 09:29:31.461615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.566 [2024-11-19 09:29:31.461779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.566 [2024-11-19 09:29:31.461943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.566 [2024-11-19 09:29:31.461959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.566 [2024-11-19 09:29:31.461965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.566 [2024-11-19 09:29:31.461973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.566 [2024-11-19 09:29:31.474016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.566 [2024-11-19 09:29:31.474429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.566 [2024-11-19 09:29:31.474464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.566 [2024-11-19 09:29:31.474490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.566 [2024-11-19 09:29:31.475085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.566 [2024-11-19 09:29:31.475569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.566 [2024-11-19 09:29:31.475578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.566 [2024-11-19 09:29:31.475585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.566 [2024-11-19 09:29:31.475591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.566 [2024-11-19 09:29:31.486860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.566 [2024-11-19 09:29:31.487289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.566 [2024-11-19 09:29:31.487335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.487359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.487940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.488479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.488488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.488495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.488501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.567 [2024-11-19 09:29:31.499844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.500287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.567 [2024-11-19 09:29:31.500332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.500355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.500769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.500943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.500960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.500967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.500974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.567 [2024-11-19 09:29:31.512823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.513263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.567 [2024-11-19 09:29:31.513309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.513333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.513816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.513997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.514008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.514016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.514023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.567 [2024-11-19 09:29:31.525883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.526219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.567 [2024-11-19 09:29:31.526239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.526247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.526411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.526575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.526584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.526590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.526597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.567 [2024-11-19 09:29:31.538760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.539172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.567 [2024-11-19 09:29:31.539190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.539198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.539375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.539541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.539550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.539556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.539563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.567 [2024-11-19 09:29:31.551943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.552235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.567 [2024-11-19 09:29:31.552252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.552261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.552437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.552616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.552627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.552634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.552640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.567 [2024-11-19 09:29:31.565092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.565430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.567 [2024-11-19 09:29:31.565448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.565456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.565629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.565806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.565817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.565824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.565831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.567 [2024-11-19 09:29:31.578049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.578382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.567 [2024-11-19 09:29:31.578399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.578408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.578580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.578755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.578765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.578771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.578778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.567 [2024-11-19 09:29:31.591054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.591341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.567 [2024-11-19 09:29:31.591359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.591367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.591539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.591713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.591723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.591729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.591736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.567 [2024-11-19 09:29:31.604107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.604509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.567 [2024-11-19 09:29:31.604527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.567 [2024-11-19 09:29:31.604535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.567 [2024-11-19 09:29:31.604707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.567 [2024-11-19 09:29:31.604880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.567 [2024-11-19 09:29:31.604890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.567 [2024-11-19 09:29:31.604901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.567 [2024-11-19 09:29:31.604907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.567 [2024-11-19 09:29:31.617277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.567 [2024-11-19 09:29:31.617659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.568 [2024-11-19 09:29:31.617677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.568 [2024-11-19 09:29:31.617686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.568 [2024-11-19 09:29:31.617865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.568 [2024-11-19 09:29:31.618052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.568 [2024-11-19 09:29:31.618063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.568 [2024-11-19 09:29:31.618070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.568 [2024-11-19 09:29:31.618076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.828 [2024-11-19 09:29:31.630277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.828 [2024-11-19 09:29:31.630739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.828 [2024-11-19 09:29:31.630785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.828 [2024-11-19 09:29:31.630810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.828 [2024-11-19 09:29:31.631405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.828 [2024-11-19 09:29:31.631930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.828 [2024-11-19 09:29:31.631939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.828 [2024-11-19 09:29:31.631946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.828 [2024-11-19 09:29:31.631959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.828 [2024-11-19 09:29:31.643269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.828 [2024-11-19 09:29:31.643667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.828 [2024-11-19 09:29:31.643684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.828 [2024-11-19 09:29:31.643693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.828 [2024-11-19 09:29:31.643867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.828 [2024-11-19 09:29:31.644045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.828 [2024-11-19 09:29:31.644056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.828 [2024-11-19 09:29:31.644062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.828 [2024-11-19 09:29:31.644070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.828 [2024-11-19 09:29:31.656169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.828 [2024-11-19 09:29:31.656490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.828 [2024-11-19 09:29:31.656508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.828 [2024-11-19 09:29:31.656516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.828 [2024-11-19 09:29:31.656680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.828 [2024-11-19 09:29:31.656846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.828 [2024-11-19 09:29:31.656857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.828 [2024-11-19 09:29:31.656864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.828 [2024-11-19 09:29:31.656870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.828 [2024-11-19 09:29:31.669199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.828 [2024-11-19 09:29:31.669480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.828 [2024-11-19 09:29:31.669497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.828 [2024-11-19 09:29:31.669505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.828 [2024-11-19 09:29:31.669668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.829 [2024-11-19 09:29:31.669831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.829 [2024-11-19 09:29:31.669840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.829 [2024-11-19 09:29:31.669846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.829 [2024-11-19 09:29:31.669853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.829 [2024-11-19 09:29:31.682165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.829 [2024-11-19 09:29:31.682545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.829 [2024-11-19 09:29:31.682563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.829 [2024-11-19 09:29:31.682571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.829 [2024-11-19 09:29:31.682744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.829 [2024-11-19 09:29:31.682917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.829 [2024-11-19 09:29:31.682926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.829 [2024-11-19 09:29:31.682933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.829 [2024-11-19 09:29:31.682939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.829 [2024-11-19 09:29:31.695203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.829 [2024-11-19 09:29:31.695482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.829 [2024-11-19 09:29:31.695500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.829 [2024-11-19 09:29:31.695511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.829 [2024-11-19 09:29:31.695685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.829 [2024-11-19 09:29:31.695860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.829 [2024-11-19 09:29:31.695870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.829 [2024-11-19 09:29:31.695877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.829 [2024-11-19 09:29:31.695884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.829 [2024-11-19 09:29:31.708194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.829 [2024-11-19 09:29:31.708540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.829 [2024-11-19 09:29:31.708558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.829 [2024-11-19 09:29:31.708566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.829 [2024-11-19 09:29:31.708729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.829 [2024-11-19 09:29:31.708893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.829 [2024-11-19 09:29:31.708903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.829 [2024-11-19 09:29:31.708910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.829 [2024-11-19 09:29:31.708916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.829 [2024-11-19 09:29:31.721110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.829 [2024-11-19 09:29:31.721398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.829 [2024-11-19 09:29:31.721415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.829 [2024-11-19 09:29:31.721423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.829 [2024-11-19 09:29:31.721595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.829 [2024-11-19 09:29:31.721769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.829 [2024-11-19 09:29:31.721779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.829 [2024-11-19 09:29:31.721786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.829 [2024-11-19 09:29:31.721793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.829 [2024-11-19 09:29:31.734080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.829 [2024-11-19 09:29:31.734464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.829 [2024-11-19 09:29:31.734480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.829 [2024-11-19 09:29:31.734487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.829 [2024-11-19 09:29:31.734651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.829 [2024-11-19 09:29:31.734818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.829 [2024-11-19 09:29:31.734828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.829 [2024-11-19 09:29:31.734834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.829 [2024-11-19 09:29:31.734841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.829 [2024-11-19 09:29:31.747108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.829 [2024-11-19 09:29:31.747442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.829 [2024-11-19 09:29:31.747460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.829 [2024-11-19 09:29:31.747469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.829 [2024-11-19 09:29:31.747632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.829 [2024-11-19 09:29:31.747796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.829 [2024-11-19 09:29:31.747806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.829 [2024-11-19 09:29:31.747815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.829 [2024-11-19 09:29:31.747821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.829 [2024-11-19 09:29:31.760451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.829 [2024-11-19 09:29:31.760875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.829 [2024-11-19 09:29:31.760892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.829 [2024-11-19 09:29:31.760900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.829 [2024-11-19 09:29:31.761083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.829 [2024-11-19 09:29:31.761263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.829 [2024-11-19 09:29:31.761273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.829 [2024-11-19 09:29:31.761280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.829 [2024-11-19 09:29:31.761287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.829 [2024-11-19 09:29:31.773530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.829 [2024-11-19 09:29:31.773960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.830 [2024-11-19 09:29:31.773978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.830 [2024-11-19 09:29:31.773986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.830 [2024-11-19 09:29:31.774164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.830 [2024-11-19 09:29:31.774343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.830 [2024-11-19 09:29:31.774353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.830 [2024-11-19 09:29:31.774363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.830 [2024-11-19 09:29:31.774371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.830 [2024-11-19 09:29:31.786595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.830 [2024-11-19 09:29:31.787074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.830 [2024-11-19 09:29:31.787093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.830 [2024-11-19 09:29:31.787101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.830 [2024-11-19 09:29:31.787278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.830 [2024-11-19 09:29:31.787457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.830 [2024-11-19 09:29:31.787467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.830 [2024-11-19 09:29:31.787474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.830 [2024-11-19 09:29:31.787481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.830 [2024-11-19 09:29:31.799687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.830 [2024-11-19 09:29:31.800117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.830 [2024-11-19 09:29:31.800137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.830 [2024-11-19 09:29:31.800145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.830 [2024-11-19 09:29:31.800323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.830 [2024-11-19 09:29:31.800504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.830 [2024-11-19 09:29:31.800514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.830 [2024-11-19 09:29:31.800521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.830 [2024-11-19 09:29:31.800529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.830 [2024-11-19 09:29:31.812751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.830 [2024-11-19 09:29:31.813184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.830 [2024-11-19 09:29:31.813202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.830 [2024-11-19 09:29:31.813210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.830 [2024-11-19 09:29:31.813389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.830 [2024-11-19 09:29:31.813569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.830 [2024-11-19 09:29:31.813579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.830 [2024-11-19 09:29:31.813586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.830 [2024-11-19 09:29:31.813593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.830 [2024-11-19 09:29:31.825846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.830 [2024-11-19 09:29:31.826293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.830 [2024-11-19 09:29:31.826311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.830 [2024-11-19 09:29:31.826318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.830 [2024-11-19 09:29:31.826497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.830 [2024-11-19 09:29:31.826676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.830 [2024-11-19 09:29:31.826686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.830 [2024-11-19 09:29:31.826693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.830 [2024-11-19 09:29:31.826700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.830 [2024-11-19 09:29:31.838920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.830 [2024-11-19 09:29:31.839382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.830 [2024-11-19 09:29:31.839400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.830 [2024-11-19 09:29:31.839408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.830 [2024-11-19 09:29:31.839586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.830 [2024-11-19 09:29:31.839766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.830 [2024-11-19 09:29:31.839775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.830 [2024-11-19 09:29:31.839783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.830 [2024-11-19 09:29:31.839790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.830 [2024-11-19 09:29:31.851998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.830 [2024-11-19 09:29:31.852408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.830 [2024-11-19 09:29:31.852425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.830 [2024-11-19 09:29:31.852433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.830 [2024-11-19 09:29:31.852610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.830 [2024-11-19 09:29:31.852789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.830 [2024-11-19 09:29:31.852799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.830 [2024-11-19 09:29:31.852805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.830 [2024-11-19 09:29:31.852812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:30.830 [2024-11-19 09:29:31.865149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.830 [2024-11-19 09:29:31.865572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.830 [2024-11-19 09:29:31.865591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.830 [2024-11-19 09:29:31.865603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.830 [2024-11-19 09:29:31.865787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.830 [2024-11-19 09:29:31.865977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.830 [2024-11-19 09:29:31.865988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.830 [2024-11-19 09:29:31.865995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.831 [2024-11-19 09:29:31.866002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:30.831 [2024-11-19 09:29:31.878236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:30.831 [2024-11-19 09:29:31.878671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.831 [2024-11-19 09:29:31.878690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:30.831 [2024-11-19 09:29:31.878698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:30.831 [2024-11-19 09:29:31.878875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:30.831 [2024-11-19 09:29:31.879064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:30.831 [2024-11-19 09:29:31.879075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:30.831 [2024-11-19 09:29:31.879082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:30.831 [2024-11-19 09:29:31.879089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.093 [2024-11-19 09:29:31.891463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:31.891895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:31.891914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:31.891922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:31.892105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:31.892286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.093 [2024-11-19 09:29:31.892295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.093 [2024-11-19 09:29:31.892302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.093 [2024-11-19 09:29:31.892309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.093 [2024-11-19 09:29:31.904576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:31.905006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:31.905024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:31.905032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:31.905212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:31.905395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.093 [2024-11-19 09:29:31.905405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.093 [2024-11-19 09:29:31.905412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.093 [2024-11-19 09:29:31.905418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.093 [2024-11-19 09:29:31.917781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:31.918211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:31.918230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:31.918238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:31.918416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:31.918595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.093 [2024-11-19 09:29:31.918606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.093 [2024-11-19 09:29:31.918612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.093 [2024-11-19 09:29:31.918619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.093 [2024-11-19 09:29:31.930820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:31.931257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:31.931275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:31.931283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:31.931461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:31.931642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.093 [2024-11-19 09:29:31.931652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.093 [2024-11-19 09:29:31.931659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.093 [2024-11-19 09:29:31.931666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.093 [2024-11-19 09:29:31.943940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:31.944332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:31.944349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:31.944358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:31.944529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:31.944702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.093 [2024-11-19 09:29:31.944712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.093 [2024-11-19 09:29:31.944722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.093 [2024-11-19 09:29:31.944730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.093 [2024-11-19 09:29:31.956766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:31.957107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:31.957124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:31.957132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:31.957296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:31.957460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.093 [2024-11-19 09:29:31.957470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.093 [2024-11-19 09:29:31.957476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.093 [2024-11-19 09:29:31.957482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.093 [2024-11-19 09:29:31.969608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:31.970032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:31.970050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:31.970058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:31.970230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:31.970404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.093 [2024-11-19 09:29:31.970414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.093 [2024-11-19 09:29:31.970421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.093 [2024-11-19 09:29:31.970428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.093 [2024-11-19 09:29:31.982534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:31.982952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:31.983001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:31.983026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:31.983573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:31.983773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.093 [2024-11-19 09:29:31.983792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.093 [2024-11-19 09:29:31.983806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.093 [2024-11-19 09:29:31.983820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.093 [2024-11-19 09:29:31.997435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:31.997893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:31.997938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:31.997977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:31.998571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:31.998825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.093 [2024-11-19 09:29:31.998838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.093 [2024-11-19 09:29:31.998850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.093 [2024-11-19 09:29:31.998861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.093 [2024-11-19 09:29:32.010467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.093 [2024-11-19 09:29:32.010796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.093 [2024-11-19 09:29:32.010814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.093 [2024-11-19 09:29:32.010821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.093 [2024-11-19 09:29:32.010993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.093 [2024-11-19 09:29:32.011163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.011173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.011180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.011186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.094 [2024-11-19 09:29:32.023415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.023846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.023890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.023914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.024468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.024859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.024877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.024893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.024907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.094 [2024-11-19 09:29:32.038192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.038681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.038704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.038722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.038983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.039240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.039253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.039262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.039272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.094 [2024-11-19 09:29:32.051238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.051671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.051716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.051739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.052336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.052907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.052918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.052924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.052931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.094 [2024-11-19 09:29:32.064039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.064410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.064427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.064436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.064599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.064763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.064772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.064779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.064785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.094 [2024-11-19 09:29:32.077232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.077668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.077686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.077694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.077872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.078060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.078071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.078078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.078085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.094 [2024-11-19 09:29:32.090193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.090614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.090631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.090639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.090802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.090971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.090981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.090987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.090994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.094 [2024-11-19 09:29:32.103193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.103598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.103643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.103667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.104185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.104356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.104366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.104373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.104379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.094 5862.20 IOPS, 22.90 MiB/s [2024-11-19T08:29:32.153Z] [2024-11-19 09:29:32.116208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.116610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.116655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.116679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.117096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.117261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.117270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.117280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.117287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.094 [2024-11-19 09:29:32.129260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.129587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.129632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.129655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.130187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.130352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.130361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.130368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.130374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.094 [2024-11-19 09:29:32.142155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.094 [2024-11-19 09:29:32.142479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.094 [2024-11-19 09:29:32.142497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.094 [2024-11-19 09:29:32.142506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.094 [2024-11-19 09:29:32.142688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.094 [2024-11-19 09:29:32.142873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.094 [2024-11-19 09:29:32.142883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.094 [2024-11-19 09:29:32.142889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.094 [2024-11-19 09:29:32.142896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.355 [2024-11-19 09:29:32.155166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.355 [2024-11-19 09:29:32.155521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.355 [2024-11-19 09:29:32.155539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.355 [2024-11-19 09:29:32.155546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.355 [2024-11-19 09:29:32.155719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.355 [2024-11-19 09:29:32.155892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.355 [2024-11-19 09:29:32.155902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.355 [2024-11-19 09:29:32.155908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.355 [2024-11-19 09:29:32.155915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.355 [2024-11-19 09:29:32.168066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.355 [2024-11-19 09:29:32.168483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.355 [2024-11-19 09:29:32.168500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.355 [2024-11-19 09:29:32.168507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.355 [2024-11-19 09:29:32.168671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.355 [2024-11-19 09:29:32.168835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.355 [2024-11-19 09:29:32.168844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.355 [2024-11-19 09:29:32.168851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.355 [2024-11-19 09:29:32.168857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.355 [2024-11-19 09:29:32.180995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.355 [2024-11-19 09:29:32.181411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.355 [2024-11-19 09:29:32.181452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.355 [2024-11-19 09:29:32.181478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.355 [2024-11-19 09:29:32.182069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.355 [2024-11-19 09:29:32.182235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.355 [2024-11-19 09:29:32.182244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.355 [2024-11-19 09:29:32.182251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.355 [2024-11-19 09:29:32.182257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.355 [2024-11-19 09:29:32.193914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.355 [2024-11-19 09:29:32.194333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.355 [2024-11-19 09:29:32.194377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.355 [2024-11-19 09:29:32.194403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.355 [2024-11-19 09:29:32.194965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.355 [2024-11-19 09:29:32.195130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.355 [2024-11-19 09:29:32.195139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.355 [2024-11-19 09:29:32.195145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.355 [2024-11-19 09:29:32.195152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.355 [2024-11-19 09:29:32.206795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.355 [2024-11-19 09:29:32.207215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.355 [2024-11-19 09:29:32.207259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.355 [2024-11-19 09:29:32.207291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.355 [2024-11-19 09:29:32.207872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.355 [2024-11-19 09:29:32.208184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.355 [2024-11-19 09:29:32.208195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.355 [2024-11-19 09:29:32.208202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.355 [2024-11-19 09:29:32.208209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.355 [2024-11-19 09:29:32.219675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.355 [2024-11-19 09:29:32.220068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.355 [2024-11-19 09:29:32.220086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.355 [2024-11-19 09:29:32.220094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.355 [2024-11-19 09:29:32.220257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.355 [2024-11-19 09:29:32.220420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.355 [2024-11-19 09:29:32.220429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.355 [2024-11-19 09:29:32.220436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.355 [2024-11-19 09:29:32.220442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.355 [2024-11-19 09:29:32.232517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.232916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.232970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.232995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.233426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.233591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.233600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.233607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.356 [2024-11-19 09:29:32.233613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.356 [2024-11-19 09:29:32.245417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.245765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.245781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.245789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.245957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.246149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.246159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.246166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.356 [2024-11-19 09:29:32.246173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.356 [2024-11-19 09:29:32.258269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.258663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.258680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.258688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.258851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.259036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.259046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.259053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.356 [2024-11-19 09:29:32.259060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.356 [2024-11-19 09:29:32.271182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.271616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.271660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.271684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.272223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.272388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.272397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.272404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.356 [2024-11-19 09:29:32.272410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.356 [2024-11-19 09:29:32.283991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.284301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.284319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.284326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.284489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.284653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.284662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.284668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.356 [2024-11-19 09:29:32.284678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.356 [2024-11-19 09:29:32.296785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.297200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.297217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.297224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.297388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.297551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.297561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.297568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.356 [2024-11-19 09:29:32.297574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.356 [2024-11-19 09:29:32.309684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.310017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.310062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.310086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.310342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.310507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.310516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.310523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.356 [2024-11-19 09:29:32.310529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.356 [2024-11-19 09:29:32.322502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.322860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.322878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.322886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.323065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.323239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.323249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.323256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.356 [2024-11-19 09:29:32.323263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.356 [2024-11-19 09:29:32.335670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.336105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.336123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.336131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.336308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.336487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.336497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.336504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.356 [2024-11-19 09:29:32.336511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.356 [2024-11-19 09:29:32.348594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.356 [2024-11-19 09:29:32.349012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.356 [2024-11-19 09:29:32.349030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.356 [2024-11-19 09:29:32.349038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.356 [2024-11-19 09:29:32.349210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.356 [2024-11-19 09:29:32.349383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.356 [2024-11-19 09:29:32.349393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.356 [2024-11-19 09:29:32.349400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.357 [2024-11-19 09:29:32.349407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.357 [2024-11-19 09:29:32.361431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.357 [2024-11-19 09:29:32.361860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.357 [2024-11-19 09:29:32.361877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.357 [2024-11-19 09:29:32.361884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.357 [2024-11-19 09:29:32.362072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.357 [2024-11-19 09:29:32.362246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.357 [2024-11-19 09:29:32.362256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.357 [2024-11-19 09:29:32.362263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.357 [2024-11-19 09:29:32.362269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.357 [2024-11-19 09:29:32.374350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.357 [2024-11-19 09:29:32.374775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.357 [2024-11-19 09:29:32.374821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.357 [2024-11-19 09:29:32.374844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.357 [2024-11-19 09:29:32.375458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.357 [2024-11-19 09:29:32.375633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.357 [2024-11-19 09:29:32.375642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.357 [2024-11-19 09:29:32.375649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.357 [2024-11-19 09:29:32.375655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.357 [2024-11-19 09:29:32.387283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.357 [2024-11-19 09:29:32.387690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.357 [2024-11-19 09:29:32.387735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.357 [2024-11-19 09:29:32.387758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.357 [2024-11-19 09:29:32.388353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.357 [2024-11-19 09:29:32.388809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.357 [2024-11-19 09:29:32.388819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.357 [2024-11-19 09:29:32.388825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.357 [2024-11-19 09:29:32.388832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.357 [2024-11-19 09:29:32.402548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.357 [2024-11-19 09:29:32.403087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.357 [2024-11-19 09:29:32.403132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.357 [2024-11-19 09:29:32.403156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.357 [2024-11-19 09:29:32.403686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.357 [2024-11-19 09:29:32.403941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.357 [2024-11-19 09:29:32.403961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.357 [2024-11-19 09:29:32.403972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.357 [2024-11-19 09:29:32.403982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.618 [2024-11-19 09:29:32.415566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.618 [2024-11-19 09:29:32.416004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.618 [2024-11-19 09:29:32.416050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.618 [2024-11-19 09:29:32.416074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.618 [2024-11-19 09:29:32.416521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.618 [2024-11-19 09:29:32.416690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.618 [2024-11-19 09:29:32.416702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.618 [2024-11-19 09:29:32.416709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.618 [2024-11-19 09:29:32.416715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.618 [2024-11-19 09:29:32.428389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.618 [2024-11-19 09:29:32.428790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.618 [2024-11-19 09:29:32.428807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.618 [2024-11-19 09:29:32.428814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.618 [2024-11-19 09:29:32.428982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.618 [2024-11-19 09:29:32.429171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.618 [2024-11-19 09:29:32.429181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.618 [2024-11-19 09:29:32.429188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.618 [2024-11-19 09:29:32.429194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.618 [2024-11-19 09:29:32.441232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.618 [2024-11-19 09:29:32.441644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.619 [2024-11-19 09:29:32.441662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.619 [2024-11-19 09:29:32.441670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.619 [2024-11-19 09:29:32.441832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.619 [2024-11-19 09:29:32.442001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.619 [2024-11-19 09:29:32.442010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.619 [2024-11-19 09:29:32.442017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.619 [2024-11-19 09:29:32.442024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.619 [2024-11-19 09:29:32.454070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.619 [2024-11-19 09:29:32.454482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.619 [2024-11-19 09:29:32.454521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.619 [2024-11-19 09:29:32.454546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.619 [2024-11-19 09:29:32.455072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.619 [2024-11-19 09:29:32.455238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.619 [2024-11-19 09:29:32.455247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.619 [2024-11-19 09:29:32.455253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.619 [2024-11-19 09:29:32.455262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.619 [2024-11-19 09:29:32.466918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.619 [2024-11-19 09:29:32.467331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.619 [2024-11-19 09:29:32.467349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.619 [2024-11-19 09:29:32.467356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.619 [2024-11-19 09:29:32.467520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.619 [2024-11-19 09:29:32.467684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.619 [2024-11-19 09:29:32.467694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.619 [2024-11-19 09:29:32.467700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.619 [2024-11-19 09:29:32.467706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.619 [2024-11-19 09:29:32.479991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.619 [2024-11-19 09:29:32.480381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.619 [2024-11-19 09:29:32.480415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.619 [2024-11-19 09:29:32.480441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.619 [2024-11-19 09:29:32.481043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.619 [2024-11-19 09:29:32.481218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.619 [2024-11-19 09:29:32.481228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.619 [2024-11-19 09:29:32.481235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.619 [2024-11-19 09:29:32.481242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.619 [2024-11-19 09:29:32.492855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.619 [2024-11-19 09:29:32.493272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.619 [2024-11-19 09:29:32.493313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.619 [2024-11-19 09:29:32.493339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.619 [2024-11-19 09:29:32.493865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.619 [2024-11-19 09:29:32.494054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.619 [2024-11-19 09:29:32.494064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.619 [2024-11-19 09:29:32.494071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.619 [2024-11-19 09:29:32.494078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.619 [2024-11-19 09:29:32.505685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.619 [2024-11-19 09:29:32.506034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.619 [2024-11-19 09:29:32.506050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.619 [2024-11-19 09:29:32.506057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.619 [2024-11-19 09:29:32.506221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.619 [2024-11-19 09:29:32.506385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.619 [2024-11-19 09:29:32.506394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.619 [2024-11-19 09:29:32.506401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.619 [2024-11-19 09:29:32.506407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.619 [2024-11-19 09:29:32.518535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.619 [2024-11-19 09:29:32.518872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.619 [2024-11-19 09:29:32.518888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.619 [2024-11-19 09:29:32.518896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.619 [2024-11-19 09:29:32.519084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.619 [2024-11-19 09:29:32.519258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.619 [2024-11-19 09:29:32.519267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.619 [2024-11-19 09:29:32.519274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.619 [2024-11-19 09:29:32.519281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.619 [2024-11-19 09:29:32.531403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.619 [2024-11-19 09:29:32.531820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.619 [2024-11-19 09:29:32.531838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.619 [2024-11-19 09:29:32.531846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.619 [2024-11-19 09:29:32.532031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.619 [2024-11-19 09:29:32.532205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.619 [2024-11-19 09:29:32.532215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.620 [2024-11-19 09:29:32.532222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.620 [2024-11-19 09:29:32.532228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.620 [2024-11-19 09:29:32.544262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.620 [2024-11-19 09:29:32.544575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.620 [2024-11-19 09:29:32.544593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.620 [2024-11-19 09:29:32.544600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.620 [2024-11-19 09:29:32.544767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.620 [2024-11-19 09:29:32.544930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.620 [2024-11-19 09:29:32.544939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.620 [2024-11-19 09:29:32.544945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.620 [2024-11-19 09:29:32.544958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.620 [2024-11-19 09:29:32.557133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.620 [2024-11-19 09:29:32.557548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.620 [2024-11-19 09:29:32.557565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.620 [2024-11-19 09:29:32.557572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.620 [2024-11-19 09:29:32.557735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.620 [2024-11-19 09:29:32.557899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.620 [2024-11-19 09:29:32.557907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.620 [2024-11-19 09:29:32.557914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.620 [2024-11-19 09:29:32.557920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.620 [2024-11-19 09:29:32.570047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.620 [2024-11-19 09:29:32.570380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.620 [2024-11-19 09:29:32.570401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.620 [2024-11-19 09:29:32.570409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.620 [2024-11-19 09:29:32.570572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.620 [2024-11-19 09:29:32.570736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.620 [2024-11-19 09:29:32.570746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.620 [2024-11-19 09:29:32.570752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.620 [2024-11-19 09:29:32.570759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.620 [2024-11-19 09:29:32.582922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.620 [2024-11-19 09:29:32.583340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.620 [2024-11-19 09:29:32.583358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.620 [2024-11-19 09:29:32.583366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.620 [2024-11-19 09:29:32.583540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.620 [2024-11-19 09:29:32.583713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.620 [2024-11-19 09:29:32.583726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.620 [2024-11-19 09:29:32.583733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.620 [2024-11-19 09:29:32.583741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.620 [2024-11-19 09:29:32.596041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.620 [2024-11-19 09:29:32.596475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.620 [2024-11-19 09:29:32.596512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.620 [2024-11-19 09:29:32.596539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.620 [2024-11-19 09:29:32.597134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.620 [2024-11-19 09:29:32.597346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.620 [2024-11-19 09:29:32.597355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.620 [2024-11-19 09:29:32.597362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.620 [2024-11-19 09:29:32.597368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.620 [2024-11-19 09:29:32.609015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.620 [2024-11-19 09:29:32.609349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.620 [2024-11-19 09:29:32.609366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.620 [2024-11-19 09:29:32.609373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.620 [2024-11-19 09:29:32.609536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.620 [2024-11-19 09:29:32.609700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.620 [2024-11-19 09:29:32.609709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.620 [2024-11-19 09:29:32.609716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.620 [2024-11-19 09:29:32.609722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.620 [2024-11-19 09:29:32.621956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.620 [2024-11-19 09:29:32.622376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.620 [2024-11-19 09:29:32.622424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.620 [2024-11-19 09:29:32.622448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.620 [2024-11-19 09:29:32.623044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.620 [2024-11-19 09:29:32.623578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.620 [2024-11-19 09:29:32.623588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.620 [2024-11-19 09:29:32.623594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.620 [2024-11-19 09:29:32.623604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.620 [2024-11-19 09:29:32.634868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.620 [2024-11-19 09:29:32.635192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.620 [2024-11-19 09:29:32.635210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.620 [2024-11-19 09:29:32.635219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.620 [2024-11-19 09:29:32.635383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.621 [2024-11-19 09:29:32.635548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.621 [2024-11-19 09:29:32.635557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.621 [2024-11-19 09:29:32.635564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.621 [2024-11-19 09:29:32.635570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.621 [2024-11-19 09:29:32.647774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.621 [2024-11-19 09:29:32.648195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-19 09:29:32.648212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.621 [2024-11-19 09:29:32.648219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.621 [2024-11-19 09:29:32.648383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.621 [2024-11-19 09:29:32.648546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.621 [2024-11-19 09:29:32.648556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.621 [2024-11-19 09:29:32.648563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.621 [2024-11-19 09:29:32.648569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.621 [2024-11-19 09:29:32.660575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.621 [2024-11-19 09:29:32.660922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.621 [2024-11-19 09:29:32.660938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.621 [2024-11-19 09:29:32.660952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.621 [2024-11-19 09:29:32.661139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.621 [2024-11-19 09:29:32.661313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.621 [2024-11-19 09:29:32.661323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.621 [2024-11-19 09:29:32.661330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.621 [2024-11-19 09:29:32.661337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.881 [2024-11-19 09:29:32.673538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.881 [2024-11-19 09:29:32.673879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.881 [2024-11-19 09:29:32.673901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.881 [2024-11-19 09:29:32.673909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.881 [2024-11-19 09:29:32.674099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.881 [2024-11-19 09:29:32.674274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.881 [2024-11-19 09:29:32.674284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.881 [2024-11-19 09:29:32.674291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.881 [2024-11-19 09:29:32.674297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.881 [2024-11-19 09:29:32.686501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.881 [2024-11-19 09:29:32.686894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.881 [2024-11-19 09:29:32.686911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.881 [2024-11-19 09:29:32.686920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.881 [2024-11-19 09:29:32.687097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.881 [2024-11-19 09:29:32.687272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.882 [2024-11-19 09:29:32.687282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.882 [2024-11-19 09:29:32.687288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.882 [2024-11-19 09:29:32.687295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.882 [2024-11-19 09:29:32.699352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.882 [2024-11-19 09:29:32.699761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.882 [2024-11-19 09:29:32.699778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.882 [2024-11-19 09:29:32.699785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.882 [2024-11-19 09:29:32.699954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.882 [2024-11-19 09:29:32.700143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.882 [2024-11-19 09:29:32.700152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.882 [2024-11-19 09:29:32.700159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.882 [2024-11-19 09:29:32.700166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.882 [2024-11-19 09:29:32.712251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.882 [2024-11-19 09:29:32.712683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.882 [2024-11-19 09:29:32.712728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.882 [2024-11-19 09:29:32.712752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.882 [2024-11-19 09:29:32.713290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.882 [2024-11-19 09:29:32.713455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.882 [2024-11-19 09:29:32.713481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.882 [2024-11-19 09:29:32.713496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.882 [2024-11-19 09:29:32.713511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.882 [2024-11-19 09:29:32.727138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.882 [2024-11-19 09:29:32.727636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.882 [2024-11-19 09:29:32.727687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.882 [2024-11-19 09:29:32.727711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.882 [2024-11-19 09:29:32.728306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.882 [2024-11-19 09:29:32.728584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.882 [2024-11-19 09:29:32.728598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.882 [2024-11-19 09:29:32.728608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.882 [2024-11-19 09:29:32.728618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
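Note the cadence: the resetting-controller notices land roughly 13 ms apart (32.454, 32.467, 32.480, ...), so these look like back-to-back poller-driven retries rather than a test-imposed sleep, consistent with a short reconnect delay in the bdev_nvme layer. The gap can be measured directly from a saved copy of this output; bdevperf.log is a placeholder filename:

    # Extract the timestamp of each disconnect notice and print the delta
    # between consecutive reconnect attempts, in seconds.
    grep -oE '\[2024-11-19 [0-9:.]+\] nvme_ctrlr.c:1728' bdevperf.log \
      | awk -F'[][ ]' '{split($3, t, ":"); s = t[1]*3600 + t[2]*60 + t[3];
                        if (p) printf "%.3f\n", s - p; p = s}'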
00:27:31.882 [2024-11-19 09:29:32.740209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.882 [2024-11-19 09:29:32.740617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.882 [2024-11-19 09:29:32.740661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.882 [2024-11-19 09:29:32.740686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.882 [2024-11-19 09:29:32.741278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.882 [2024-11-19 09:29:32.741517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.882 [2024-11-19 09:29:32.741527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.882 [2024-11-19 09:29:32.741533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.882 [2024-11-19 09:29:32.741540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1265853 Killed "${NVMF_APP[@]}" "$@" 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1267044 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1267044 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1267044 ']' 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
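Here the cause of the loop becomes explicit: bdevperf.sh line 35 has killed the original nvmf_tgt (pid 1265853), so every connect above was refused, and tgt_init immediately launches a replacement target (new pid 1267044) inside the cvl_0_0_ns_spdk network namespace, then parks in waitforlisten until the new process answers on /var/tmp/spdk.sock. A minimal stand-in for that wait, assuming only the pid and socket path shown in the log and a working directory at the SPDK repo root (the real helper in the test harness is more thorough):

    # Poll until the RPC domain socket appears while the target stays alive,
    # then confirm the target actually services an RPC.
    nvmfpid=1267044
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "target exited early"; exit 1; }
        [ -S "$rpc_sock" ] && break
        sleep 0.1
    done
    scripts/rpc.py -s "$rpc_sock" rpc_get_methods > /dev/null && echo "target ready"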
00:27:31.882 [2024-11-19 09:29:32.753330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:31.882 [2024-11-19 09:29:32.753763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.882 [2024-11-19 09:29:32.753782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.882 [2024-11-19 09:29:32.753790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.882 09:29:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:31.882 [2024-11-19 09:29:32.753973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.882 [2024-11-19 09:29:32.754154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.882 [2024-11-19 09:29:32.754165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.882 [2024-11-19 09:29:32.754172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.882 [2024-11-19 09:29:32.754179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.882 [2024-11-19 09:29:32.766401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.882 [2024-11-19 09:29:32.766733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.882 [2024-11-19 09:29:32.766750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.882 [2024-11-19 09:29:32.766758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.882 [2024-11-19 09:29:32.766936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.882 [2024-11-19 09:29:32.767122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.882 [2024-11-19 09:29:32.767132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.882 [2024-11-19 09:29:32.767140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.882 [2024-11-19 09:29:32.767147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.882 [2024-11-19 09:29:32.779547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.882 [2024-11-19 09:29:32.779921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.882 [2024-11-19 09:29:32.779938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.882 [2024-11-19 09:29:32.779951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.882 [2024-11-19 09:29:32.780140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.882 [2024-11-19 09:29:32.780316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.882 [2024-11-19 09:29:32.780325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.882 [2024-11-19 09:29:32.780333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.882 [2024-11-19 09:29:32.780340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.882 [2024-11-19 09:29:32.792628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.882 [2024-11-19 09:29:32.792900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.882 [2024-11-19 09:29:32.792918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.882 [2024-11-19 09:29:32.792926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.882 [2024-11-19 09:29:32.793105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.882 [2024-11-19 09:29:32.793281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.882 [2024-11-19 09:29:32.793290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.882 [2024-11-19 09:29:32.793297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.882 [2024-11-19 09:29:32.793304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.882 [2024-11-19 09:29:32.799076] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
00:27:31.883 [2024-11-19 09:29:32.799116] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.883 [2024-11-19 09:29:32.805723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.883 [2024-11-19 09:29:32.806164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.883 [2024-11-19 09:29:32.806183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.883 [2024-11-19 09:29:32.806192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.883 [2024-11-19 09:29:32.806371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.883 [2024-11-19 09:29:32.806551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.883 [2024-11-19 09:29:32.806561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.883 [2024-11-19 09:29:32.806570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.883 [2024-11-19 09:29:32.806577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.883 [2024-11-19 09:29:32.818797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.883 [2024-11-19 09:29:32.819245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.883 [2024-11-19 09:29:32.819264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.883 [2024-11-19 09:29:32.819272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.883 [2024-11-19 09:29:32.819445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.883 [2024-11-19 09:29:32.819622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.883 [2024-11-19 09:29:32.819632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.883 [2024-11-19 09:29:32.819639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.883 [2024-11-19 09:29:32.819646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.883 [2024-11-19 09:29:32.831959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.883 [2024-11-19 09:29:32.832377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.883 [2024-11-19 09:29:32.832396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.883 [2024-11-19 09:29:32.832404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.883 [2024-11-19 09:29:32.832583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.883 [2024-11-19 09:29:32.832762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.883 [2024-11-19 09:29:32.832772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.883 [2024-11-19 09:29:32.832779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.883 [2024-11-19 09:29:32.832786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.883 [2024-11-19 09:29:32.845148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.883 [2024-11-19 09:29:32.845603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.883 [2024-11-19 09:29:32.845620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.883 [2024-11-19 09:29:32.845629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.883 [2024-11-19 09:29:32.845806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.883 [2024-11-19 09:29:32.845991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.883 [2024-11-19 09:29:32.846001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.883 [2024-11-19 09:29:32.846009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.883 [2024-11-19 09:29:32.846016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.883 [2024-11-19 09:29:32.858317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.883 [2024-11-19 09:29:32.858667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.883 [2024-11-19 09:29:32.858684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.883 [2024-11-19 09:29:32.858692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.883 [2024-11-19 09:29:32.858871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.883 [2024-11-19 09:29:32.859057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.883 [2024-11-19 09:29:32.859068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.883 [2024-11-19 09:29:32.859079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.883 [2024-11-19 09:29:32.859086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.883 [2024-11-19 09:29:32.871459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.883 [2024-11-19 09:29:32.871889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.883 [2024-11-19 09:29:32.871906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.883 [2024-11-19 09:29:32.871915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.883 [2024-11-19 09:29:32.872092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.883 [2024-11-19 09:29:32.872265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.883 [2024-11-19 09:29:32.872275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.883 [2024-11-19 09:29:32.872282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.883 [2024-11-19 09:29:32.872288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.883 [2024-11-19 09:29:32.879799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:31.883 [2024-11-19 09:29:32.884559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.883 [2024-11-19 09:29:32.884994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.883 [2024-11-19 09:29:32.885012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.883 [2024-11-19 09:29:32.885020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.883 [2024-11-19 09:29:32.885193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.883 [2024-11-19 09:29:32.885367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.883 [2024-11-19 09:29:32.885376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.883 [2024-11-19 09:29:32.885383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.883 [2024-11-19 09:29:32.885390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.883 [2024-11-19 09:29:32.897578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.883 [2024-11-19 09:29:32.898002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.883 [2024-11-19 09:29:32.898021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.883 [2024-11-19 09:29:32.898030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.883 [2024-11-19 09:29:32.898207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.883 [2024-11-19 09:29:32.898388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.883 [2024-11-19 09:29:32.898398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.883 [2024-11-19 09:29:32.898405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.883 [2024-11-19 09:29:32.898412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:31.883 [2024-11-19 09:29:32.910694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.883 [2024-11-19 09:29:32.911128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.883 [2024-11-19 09:29:32.911147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.883 [2024-11-19 09:29:32.911156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.883 [2024-11-19 09:29:32.911342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.883 [2024-11-19 09:29:32.911518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.883 [2024-11-19 09:29:32.911528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.883 [2024-11-19 09:29:32.911534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.883 [2024-11-19 09:29:32.911541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:31.883 [2024-11-19 09:29:32.922552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.883 [2024-11-19 09:29:32.922581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.883 [2024-11-19 09:29:32.922589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.883 [2024-11-19 09:29:32.922595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.883 [2024-11-19 09:29:32.922600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
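The app_setup_trace notices are the target acknowledging the -e 0xFFFF flag it was launched with: all tracepoint groups are enabled and the trace ring sits in shared memory at /dev/shm/nvmf_trace.0. The two ways to use it below are lifted directly from the notice text; -i 0 matches the instance id the target was started with:

    # Take a live snapshot of nvmf tracepoints from the running target.
    spdk_trace -s nvmf -i 0
    # Or preserve the shared-memory ring for offline analysis.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0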
00:27:31.883 [2024-11-19 09:29:32.923801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:31.884 [2024-11-19 09:29:32.924017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.884 [2024-11-19 09:29:32.924128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.884 [2024-11-19 09:29:32.924130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.884 [2024-11-19 09:29:32.924247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.884 [2024-11-19 09:29:32.924265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:31.884 [2024-11-19 09:29:32.924275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:31.884 [2024-11-19 09:29:32.924455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:31.884 [2024-11-19 09:29:32.924635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:31.884 [2024-11-19 09:29:32.924645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:31.884 [2024-11-19 09:29:32.924652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:31.884 [2024-11-19 09:29:32.924660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.143 [2024-11-19 09:29:32.936908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.143 [2024-11-19 09:29:32.937363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.143 [2024-11-19 09:29:32.937383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.143 [2024-11-19 09:29:32.937392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.143 [2024-11-19 09:29:32.937572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.143 [2024-11-19 09:29:32.937757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.143 [2024-11-19 09:29:32.937767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.143 [2024-11-19 09:29:32.937775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.143 [2024-11-19 09:29:32.937785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
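The three reactor lines match the core mask: -m 0xE (passed through to DPDK as -c 0xE in the EAL parameters above) is binary 1110, so bits 1 through 3 are set, core 0 is excluded, and reactors come up on cores 1, 2 and 3, matching the earlier "Total cores available: 3" notice. Decoding a mask like this is plain shell arithmetic; the 32-core bound is an arbitrary limit for the sketch:

    mask=0xE
    for i in $(seq 0 31); do
        # Test bit i of the mask; each set bit hosts one SPDK reactor.
        (( (mask >> i) & 1 )) && echo "reactor on core $i"
    done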
00:27:32.144 [2024-11-19 09:29:32.950015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.144 [2024-11-19 09:29:32.950465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.144 [2024-11-19 09:29:32.950485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.144 [2024-11-19 09:29:32.950494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.144 [2024-11-19 09:29:32.950675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.144 [2024-11-19 09:29:32.950855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.144 [2024-11-19 09:29:32.950865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.144 [2024-11-19 09:29:32.950873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.144 [2024-11-19 09:29:32.950881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.144 [2024-11-19 09:29:32.963100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.144 [2024-11-19 09:29:32.963416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.144 [2024-11-19 09:29:32.963436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.144 [2024-11-19 09:29:32.963446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.144 [2024-11-19 09:29:32.963625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.144 [2024-11-19 09:29:32.963806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.144 [2024-11-19 09:29:32.963816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.144 [2024-11-19 09:29:32.963823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.144 [2024-11-19 09:29:32.963831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.144 [2024-11-19 09:29:32.976227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.144 [2024-11-19 09:29:32.976539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.144 [2024-11-19 09:29:32.976559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.144 [2024-11-19 09:29:32.976568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.144 [2024-11-19 09:29:32.976748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.144 [2024-11-19 09:29:32.976929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.144 [2024-11-19 09:29:32.976940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.144 [2024-11-19 09:29:32.976958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.144 [2024-11-19 09:29:32.976967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.144 [2024-11-19 09:29:32.989381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.144 [2024-11-19 09:29:32.989806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.144 [2024-11-19 09:29:32.989826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.144 [2024-11-19 09:29:32.989835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.144 [2024-11-19 09:29:32.990020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.144 [2024-11-19 09:29:32.990202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.144 [2024-11-19 09:29:32.990213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.144 [2024-11-19 09:29:32.990220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.144 [2024-11-19 09:29:32.990228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.144 [2024-11-19 09:29:33.002444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.144 [2024-11-19 09:29:33.002873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.144 [2024-11-19 09:29:33.002891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.144 [2024-11-19 09:29:33.002900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.144 [2024-11-19 09:29:33.003083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.144 [2024-11-19 09:29:33.003263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.144 [2024-11-19 09:29:33.003274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.144 [2024-11-19 09:29:33.003281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.144 [2024-11-19 09:29:33.003288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.144 [2024-11-19 09:29:33.015516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.144 [2024-11-19 09:29:33.015873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.144 [2024-11-19 09:29:33.015891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.144 [2024-11-19 09:29:33.015900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.144 [2024-11-19 09:29:33.016084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.144 [2024-11-19 09:29:33.016263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.144 [2024-11-19 09:29:33.016274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.144 [2024-11-19 09:29:33.016283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.144 [2024-11-19 09:29:33.016290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.144 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:32.144 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:27:32.144 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:32.144 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:32.144 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.144 [2024-11-19 09:29:33.028671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.144 [2024-11-19 09:29:33.029085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.144 [2024-11-19 09:29:33.029104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.144 [2024-11-19 09:29:33.029112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.144 [2024-11-19 09:29:33.029291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.144 [2024-11-19 09:29:33.029472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.144 [2024-11-19 09:29:33.029482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.144 [2024-11-19 09:29:33.029488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.144 [2024-11-19 09:29:33.029495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.144 [2024-11-19 09:29:33.041875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.144 [2024-11-19 09:29:33.042173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.144 [2024-11-19 09:29:33.042192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.144 [2024-11-19 09:29:33.042200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.144 [2024-11-19 09:29:33.042378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.144 [2024-11-19 09:29:33.042557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.144 [2024-11-19 09:29:33.042568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.144 [2024-11-19 09:29:33.042576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.144 [2024-11-19 09:29:33.042583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
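The block repeating above is bdevperf's reset path spinning on a dead connection: each pass disconnects controller 0x9fa500, retries connect() to 10.0.0.2:4420, hits errno 111, and abandons reinitialization; it interleaves with the next test's shell trace because both jobs write to the same console. A quick way to confirm what errno 111 means on Linux (assuming python3 is available on the test host, which is not shown in this log):

    # errno 111 is ECONNREFUSED: nothing is accepting on 10.0.0.2:4420 yet.
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # ECONNREFUSED Connection refused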
00:27:32.144 [2024-11-19 09:29:33.054968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.144 [2024-11-19 09:29:33.055267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.144 [2024-11-19 09:29:33.055285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.144 [2024-11-19 09:29:33.055293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.144 [2024-11-19 09:29:33.055472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.144 [2024-11-19 09:29:33.055652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.144 [2024-11-19 09:29:33.055662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.144 [2024-11-19 09:29:33.055669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.144 [2024-11-19 09:29:33.055675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.144 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.144 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:32.144 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.144 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.145 [2024-11-19 09:29:33.061504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.145 [2024-11-19 09:29:33.068081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.145 [2024-11-19 09:29:33.068370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.145 [2024-11-19 09:29:33.068388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.145 [2024-11-19 09:29:33.068396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.145 [2024-11-19 09:29:33.068575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.145 [2024-11-19 09:29:33.068754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.145 [2024-11-19 09:29:33.068764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.145 [2024-11-19 09:29:33.068771] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.145 [2024-11-19 09:29:33.068777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.145 [2024-11-19 09:29:33.081158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.145 [2024-11-19 09:29:33.081450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.145 [2024-11-19 09:29:33.081468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.145 [2024-11-19 09:29:33.081476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.145 [2024-11-19 09:29:33.081655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.145 [2024-11-19 09:29:33.081836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.145 [2024-11-19 09:29:33.081846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.145 [2024-11-19 09:29:33.081853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.145 [2024-11-19 09:29:33.081860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.145 [2024-11-19 09:29:33.094222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.145 [2024-11-19 09:29:33.094509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.145 [2024-11-19 09:29:33.094527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.145 [2024-11-19 09:29:33.094535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.145 [2024-11-19 09:29:33.094718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.145 [2024-11-19 09:29:33.094898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.145 [2024-11-19 09:29:33.094909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.145 [2024-11-19 09:29:33.094916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.145 [2024-11-19 09:29:33.094923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.145 Malloc0 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.145 [2024-11-19 09:29:33.107352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.145 [2024-11-19 09:29:33.107660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.145 [2024-11-19 09:29:33.107679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.145 [2024-11-19 09:29:33.107687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.145 [2024-11-19 09:29:33.107865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.145 [2024-11-19 09:29:33.108059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.145 [2024-11-19 09:29:33.108070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.145 [2024-11-19 09:29:33.108077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.145 [2024-11-19 09:29:33.108084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:32.145 4885.17 IOPS, 19.08 MiB/s [2024-11-19T08:29:33.204Z] 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.145 [2024-11-19 09:29:33.120451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.145 [2024-11-19 09:29:33.120884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.145 [2024-11-19 09:29:33.120903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9fa500 with addr=10.0.0.2, port=4420 00:27:32.145 [2024-11-19 09:29:33.120911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fa500 is same with the state(6) to be set 00:27:32.145 [2024-11-19 09:29:33.121095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa500 (9): Bad file descriptor 00:27:32.145 [2024-11-19 09:29:33.121276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:32.145 [2024-11-19 09:29:33.121286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:32.145 [2024-11-19 09:29:33.121294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:32.145 [2024-11-19 09:29:33.121301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:32.145 [2024-11-19 09:29:33.125528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.145 09:29:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1266116 00:27:32.145 [2024-11-19 09:29:33.133535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:32.145 [2024-11-19 09:29:33.162408] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
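Stripped of the xtrace noise, the target bring-up that host/bdevperf.sh performs in the lines above reduces to five RPCs. A minimal sketch using the commands and arguments visible in the trace; invoking them through ./scripts/rpc.py is an assumption (the harness wraps them in rpc_cmd):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # transport flags exactly as traced (from NVMF_TRANSPORT_OPTS)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB ramdisk with 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As soon as the listener is up, the retry loop converges: "Resetting controller successful" above, followed by the climbing IOPS samples.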
00:27:34.449 5678.14 IOPS, 22.18 MiB/s [2024-11-19T08:29:36.441Z] 6365.00 IOPS, 24.86 MiB/s [2024-11-19T08:29:37.372Z] 6901.11 IOPS, 26.96 MiB/s [2024-11-19T08:29:38.305Z] 7338.00 IOPS, 28.66 MiB/s [2024-11-19T08:29:39.238Z] 7679.09 IOPS, 30.00 MiB/s [2024-11-19T08:29:40.170Z] 7972.42 IOPS, 31.14 MiB/s [2024-11-19T08:29:41.586Z] 8215.54 IOPS, 32.09 MiB/s [2024-11-19T08:29:42.181Z] 8424.93 IOPS, 32.91 MiB/s 00:27:41.123 Latency(us) 00:27:41.123 [2024-11-19T08:29:42.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.123 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:41.123 Verification LBA range: start 0x0 length 0x4000 00:27:41.123 Nvme1n1 : 15.01 8610.13 33.63 10771.24 0.00 6584.23 658.92 20743.57 00:27:41.123 [2024-11-19T08:29:42.182Z] =================================================================================================================== 00:27:41.123 [2024-11-19T08:29:42.182Z] Total : 8610.13 33.63 10771.24 0.00 6584.23 658.92 20743.57 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.380 rmmod nvme_tcp 00:27:41.380 rmmod nvme_fabrics 00:27:41.380 rmmod nvme_keyring 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1267044 ']' 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1267044 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 1267044 ']' 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 1267044 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1267044 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1267044' 00:27:41.380 killing process with pid 1267044 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 1267044 00:27:41.380 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 1267044 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.639 09:29:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:44.175 00:27:44.175 real 0m25.969s 00:27:44.175 user 1m0.419s 00:27:44.175 sys 0m6.819s 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:44.175 ************************************ 00:27:44.175 END TEST nvmf_bdevperf 00:27:44.175 ************************************ 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.175 ************************************ 00:27:44.175 START TEST nvmf_target_disconnect 00:27:44.175 ************************************ 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:44.175 * Looking for test storage... 
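Per the table above, the bdevperf job closes at about 8610 IOPS over its 15.01 s runtime despite the induced resets, and nvmftestfini then tears the rig down. A hedged outline of that teardown, in the order the trace shows it; killprocess resolves to a plain kill/wait, and the netns delete line is an assumption since the log only shows _remove_spdk_ns running:

    modprobe -v -r nvme-tcp                                # produces the rmmod nvme_tcp line above
    modprobe -v -r nvme-fabrics                            # then nvme_fabrics (nvme_keyring also unloads)
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess 1267044 in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # leave the initiator NIC unconfigured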
00:27:44.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.175 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:44.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.175 --rc genhtml_branch_coverage=1 00:27:44.175 --rc genhtml_function_coverage=1 00:27:44.176 --rc genhtml_legend=1 00:27:44.176 --rc geninfo_all_blocks=1 00:27:44.176 --rc geninfo_unexecuted_blocks=1 00:27:44.176 00:27:44.176 ' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:44.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.176 --rc genhtml_branch_coverage=1 00:27:44.176 --rc genhtml_function_coverage=1 00:27:44.176 --rc genhtml_legend=1 00:27:44.176 --rc geninfo_all_blocks=1 00:27:44.176 --rc geninfo_unexecuted_blocks=1 00:27:44.176 00:27:44.176 ' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:44.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.176 --rc genhtml_branch_coverage=1 00:27:44.176 --rc genhtml_function_coverage=1 00:27:44.176 --rc genhtml_legend=1 00:27:44.176 --rc geninfo_all_blocks=1 00:27:44.176 --rc geninfo_unexecuted_blocks=1 00:27:44.176 00:27:44.176 ' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:44.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.176 --rc genhtml_branch_coverage=1 00:27:44.176 --rc genhtml_function_coverage=1 00:27:44.176 --rc genhtml_legend=1 00:27:44.176 --rc geninfo_all_blocks=1 00:27:44.176 --rc geninfo_unexecuted_blocks=1 00:27:44.176 00:27:44.176 ' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:44.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:44.176 09:29:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:50.745 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:50.746 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:50.746 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:50.746 Found net devices under 0000:86:00.0: cvl_0_0 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:50.746 Found net devices under 0000:86:00.1: cvl_0_1 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
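At this point nvmftestinit has located both E810 ports (0000:86:00.0 and 0000:86:00.1, exposed as cvl_0_0 and cvl_0_1) and proceeds to wire them into a point-to-point rig. The nvmf_tcp_init trace that follows amounts to roughly this, with commands and addresses lifted from the trace itself:

    ip netns add cvl_0_0_ns_spdk                  # the target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF in the real rule
    ping -c 1 10.0.0.2                            # reachability checked in both directions

The sub-millisecond RTTs in the ping output below suggest the two ports are connected back-to-back on the same machine, which is what this namespace split is for.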
00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:50.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:27:50.746 00:27:50.746 --- 10.0.0.2 ping statistics --- 00:27:50.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.746 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:27:50.746 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:50.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:27:50.747 00:27:50.747 --- 10.0.0.1 ping statistics --- 00:27:50.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.747 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:50.747 ************************************ 00:27:50.747 START TEST nvmf_target_disconnect_tc1 00:27:50.747 ************************************ 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:50.747 09:29:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:50.747 09:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:50.747 [2024-11-19 09:29:51.031809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.747 [2024-11-19 09:29:51.031925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a70ab0 with addr=10.0.0.2, port=4420 00:27:50.747 [2024-11-19 09:29:51.031988] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:50.747 [2024-11-19 09:29:51.032025] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:50.747 [2024-11-19 09:29:51.032045] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:50.747 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:50.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:50.747 Initializing NVMe Controllers 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:50.747 00:27:50.747 real 0m0.121s 00:27:50.747 user 0m0.053s 00:27:50.747 sys 0m0.068s 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.747 ************************************ 00:27:50.747 END TEST nvmf_target_disconnect_tc1 00:27:50.747 ************************************ 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:50.747 ************************************ 00:27:50.747 START TEST nvmf_target_disconnect_tc2 00:27:50.747 ************************************ 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1272217 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1272217 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1272217 ']' 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.747 [2024-11-19 09:29:51.176162] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:27:50.747 [2024-11-19 09:29:51.176203] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.747 [2024-11-19 09:29:51.252746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:50.747 [2024-11-19 09:29:51.294637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.747 [2024-11-19 09:29:51.294674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:50.747 [2024-11-19 09:29:51.294682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.747 [2024-11-19 09:29:51.294688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.747 [2024-11-19 09:29:51.294693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:50.747 [2024-11-19 09:29:51.296340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:50.747 [2024-11-19 09:29:51.296444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:50.747 [2024-11-19 09:29:51.296550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:50.747 [2024-11-19 09:29:51.296551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.747 Malloc0 00:27:50.747 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.748 [2024-11-19 09:29:51.459387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.748 09:29:51 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.748 [2024-11-19 09:29:51.491682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1272245 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:50.748 09:29:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:52.662 09:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1272217 00:27:52.662 09:29:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:52.662 Read completed with error (sct=0, sc=8) 00:27:52.662 starting I/O failed 00:27:52.662 Read completed with error (sct=0, sc=8) 00:27:52.662 starting I/O failed 00:27:52.662 Read completed with error (sct=0, sc=8) 00:27:52.662 starting I/O failed 00:27:52.662 Read completed with error (sct=0, sc=8) 00:27:52.662 starting I/O failed 00:27:52.662 Read completed with error (sct=0, sc=8) 00:27:52.662 starting I/O failed 00:27:52.662 Read completed with error (sct=0, sc=8) 00:27:52.662 starting I/O failed 00:27:52.662 Read completed with error 
(sct=0, sc=8) 00:27:52.662 starting I/O failed
[... "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" repeats for each remaining outstanding command on this qpair ...]
00:27:52.662 [2024-11-19 09:29:53.529820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... an equivalent burst of aborted Read/Write completions ...]
00:27:52.662 [2024-11-19 09:29:53.530038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... an equivalent burst of aborted Read/Write completions ...]
00:27:52.663 [2024-11-19 09:29:53.530233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... an equivalent burst of aborted Read/Write completions ...]
00:27:52.663 [2024-11-19 09:29:53.530430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
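The four bursts above are the expected teardown pattern rather than a test bug: kill -9 on the target (pid 1272217) drops every TCP connection at once, so each of the reconnect app's four qpairs (core mask 0xF, queue depth 32 from the -q 32 invocation) completes its outstanding Read/Write commands with sct=0, sc=0x8, which in status code type 0 is the generic "Command Aborted due to SQ Deletion" status, and the CQ transport error -6 then retires the qpair itself (qpair ids 3, 2, 4, 1). A quick tally over a saved console log (file name assumed) confirms the shape:

    # count aborted completions, then break the CQ errors down per qpair
    grep -c 'completed with error (sct=0, sc=8)' console.log
    grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c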
00:27:52.663 [2024-11-19 09:29:53.530609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.663 [2024-11-19 09:29:53.530632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.663 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it") repeats for every reconnect attempt from 09:29:53.530745 through 09:29:53.549557, cycling through tqpair=0x22f6ba0, 0x7faea4000b90, 0x7fae98000b90 and 0x7fae9c000b90 ...]
00:27:52.667 [2024-11-19 09:29:53.549625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.667 [2024-11-19 09:29:53.549640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.667 qpair failed and we were unable to recover it.
00:27:52.667 [2024-11-19 09:29:53.549731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.549747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.549818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.549832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.549970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.549986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.550056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.550074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.550156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.550172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.550311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.550325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.550423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.550438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.550520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.550536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.550605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.550619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.550761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.550776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 
00:27:52.667 [2024-11-19 09:29:53.550846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.550860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.550935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.550955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.551029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.551043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.551124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.551139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.551279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.551294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.551392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.551427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.551545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.551576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.551772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.551805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.551917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.551960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.552156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.552188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 
00:27:52.667 [2024-11-19 09:29:53.552366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.552397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.552530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.667 [2024-11-19 09:29:53.552545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.667 qpair failed and we were unable to recover it. 00:27:52.667 [2024-11-19 09:29:53.552684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.552700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.552857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.552873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.553080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.553115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.553235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.553266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.553450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.553482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.553581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.553600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.553680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.553698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.553803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.553835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 
00:27:52.668 [2024-11-19 09:29:53.553944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.553991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.554108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.554140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.554317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.554349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.554476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.554495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.554715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.554746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.554873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.554904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.555043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.555077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.555275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.555306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.555518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.555551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.555668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.555699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 
00:27:52.668 [2024-11-19 09:29:53.555832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.555864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.556110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.556142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.556325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.556357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.556533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.556551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.556696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.556715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.556818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.556837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.556928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.556956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.557109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.557129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.557280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.557298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.557372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.557389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 
00:27:52.668 [2024-11-19 09:29:53.557545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.557564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.557648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.557690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.557931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.557974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.558172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.558204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.558317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.558349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.558523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.558554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.558680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.558700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.558798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.558817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.668 qpair failed and we were unable to recover it. 00:27:52.668 [2024-11-19 09:29:53.558898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.668 [2024-11-19 09:29:53.558917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.559031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.559050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 
00:27:52.669 [2024-11-19 09:29:53.559284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.559304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.559416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.559448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.559555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.559586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.559691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.559723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.559829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.559861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.560159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.560192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.560297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.560317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.560410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.560429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.560521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.560540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.560716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.560749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 
00:27:52.669 [2024-11-19 09:29:53.560876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.560907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.561031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.561070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.561325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.561344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.561525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.561557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.561693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.561723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.561826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.561858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.561988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.562021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.562137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.562168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.562277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.562309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.562451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.562483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 
00:27:52.669 [2024-11-19 09:29:53.562664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.562699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.562789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.562808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.562896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.562917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.563001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.563020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.563121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.563140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.563301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.563334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.563443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.563474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.563580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.563613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.563738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.563769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.563897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.563929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 
00:27:52.669 [2024-11-19 09:29:53.564133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.564168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.564307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.564338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.564475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.564507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.564610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.564642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.669 [2024-11-19 09:29:53.564764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.669 [2024-11-19 09:29:53.564795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.669 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.566209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.566264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.566494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.566529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.566708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.566742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.566868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.566908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.567039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.567073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 
00:27:52.670 [2024-11-19 09:29:53.567177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.567209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.567388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.567420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.567598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.567631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.567798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.567830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.568089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.568123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.568246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.568278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.568468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.568501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.568622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.568654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.568774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.568805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.568918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.568958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 
00:27:52.670 [2024-11-19 09:29:53.569126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.569159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.569326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.569358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.569491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.569524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.569636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.569668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.569799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.569832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.569940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.569981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.570168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.570202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.570332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.570364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.570473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.570507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.570679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.570711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 
00:27:52.670 [2024-11-19 09:29:53.570839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.570871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.570991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.571024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.571208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.571241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.571343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.571375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.571480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.571511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.571623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.571662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.571833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.571866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.571978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.572012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.572118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.572151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.572264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.572297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 
00:27:52.670 [2024-11-19 09:29:53.572477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.572509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.572628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.572660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.572834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.670 [2024-11-19 09:29:53.572866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.670 qpair failed and we were unable to recover it. 00:27:52.670 [2024-11-19 09:29:53.573012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.573048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.573164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.573195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.573303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.573335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.573448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.573478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.573575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.573605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.573780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.573809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.573978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.574027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 
00:27:52.671 [2024-11-19 09:29:53.574137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.574167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.574265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.574294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.574390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.574419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.574515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.574546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.574743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.574772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.574892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.574921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.575107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.575138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.575237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.575266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.575366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.575395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.575577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.575606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 
00:27:52.671 [2024-11-19 09:29:53.575770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.575800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.575930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.575970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.576088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.576121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.576227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.576260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.576371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.576403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.576519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.576548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.576789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.576822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.576935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.577046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.577157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.577189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.577320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.577353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 
00:27:52.671 [2024-11-19 09:29:53.577471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.577503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.577620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.577653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.577770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.671 [2024-11-19 09:29:53.577802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.671 qpair failed and we were unable to recover it. 00:27:52.671 [2024-11-19 09:29:53.577904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.577936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.578125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.578158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.578300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.578333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.578455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.578498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.578773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.578805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.578977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.579008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.579104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.579134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 
00:27:52.672 [2024-11-19 09:29:53.579238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.579267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.579362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.579392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.579573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.579602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.579720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.579748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.579926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.579962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.580067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.580097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.580277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.580306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.580430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.580458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.580649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.580678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.580780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.580809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 
00:27:52.672 [2024-11-19 09:29:53.580980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.581011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.581115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.581144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.581321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.581350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.581453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.581481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.581690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.581721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.581833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.581865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.581989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.582022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.582235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.582267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.582442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.582514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.582647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.582684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 
00:27:52.672 [2024-11-19 09:29:53.582864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.582897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.583103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.583138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.583332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.583364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.583530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.583598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.583735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.583771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.583960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.583992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.584111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.584142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.584252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.584283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.584401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.584432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.672 [2024-11-19 09:29:53.584605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.584637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 
00:27:52.672 [2024-11-19 09:29:53.584816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.672 [2024-11-19 09:29:53.584848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.672 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.584973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.585007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.585115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.585145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.585250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.585281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.585461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.585492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.585595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.585628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.585798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.585830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.585942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.585984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.586174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.586207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.586312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.586342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 
00:27:52.673 [2024-11-19 09:29:53.586546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.586577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.586756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.586787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.587039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.587072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.587203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.587235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.587427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.587459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.587648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.587681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.587803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.587835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.587969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.588004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.588112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.588143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.588315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.588346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 
00:27:52.673 [2024-11-19 09:29:53.588471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.588503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.588699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.588732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.588842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.588873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.588998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.589033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.590373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.590428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.590715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.590749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.590938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.590983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.591241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.591273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.591375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.591406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.591508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.591540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 
00:27:52.673 [2024-11-19 09:29:53.591776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.591808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.591922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.591965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.592089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.592120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.592249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.592290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.592405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.592436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.594175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.594231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.673 qpair failed and we were unable to recover it. 00:27:52.673 [2024-11-19 09:29:53.594460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.673 [2024-11-19 09:29:53.594494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.594763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.594797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.595056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.595090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.595210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.595242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 
00:27:52.674 [2024-11-19 09:29:53.595348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.595380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.595503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.595536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.595656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.595688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.595827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.595860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.595972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.596006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.596191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.596224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.596405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.596437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.596551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.596582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.596748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.596781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.596907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.596940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 
00:27:52.674 [2024-11-19 09:29:53.597074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.597107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.597280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.597313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.597439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.597472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.597592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.597623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.597755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.597787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.598013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.598050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.598162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.598195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.598386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.598418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.598538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.598569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.598756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.598788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 
00:27:52.674 [2024-11-19 09:29:53.598916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.598957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.599069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.599101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.599391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.599422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.599536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.599568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.599745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.599777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.599896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.599928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.600123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.600154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.600328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.600359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.600462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.600493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.600611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.600643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 
00:27:52.674 [2024-11-19 09:29:53.600751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.600782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.600902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.600935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.674 [2024-11-19 09:29:53.601135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.674 [2024-11-19 09:29:53.601168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.674 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.601406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.601444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.601554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.601586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.601715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.601746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.601968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.602002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.602188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.602219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.602359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.602391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.602528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.602560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 
00:27:52.675 [2024-11-19 09:29:53.602672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.602705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.602823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.602855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.602968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.603003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.603173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.603204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.603384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.603416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.603594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.603626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.603812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.603845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.603970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.604004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.604193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.604225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.604344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.604376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 
00:27:52.675 [2024-11-19 09:29:53.604613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.604645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.604822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.604855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.605051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.605085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.605201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.605233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.605352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.605383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.605501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.605533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.605642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.605673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.605853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.605884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.606003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.606036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.606236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.606268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 
00:27:52.675 [2024-11-19 09:29:53.606448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.606479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.606656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.606686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.606794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.606822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.606997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.607026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.607210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.607239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.607336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.607366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.607464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.607493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.607669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.607699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.607817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.607846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 00:27:52.675 [2024-11-19 09:29:53.608023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.675 [2024-11-19 09:29:53.608053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.675 qpair failed and we were unable to recover it. 
00:27:52.675 [2024-11-19 09:29:53.608192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.608234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 00:27:52.676 [2024-11-19 09:29:53.608337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.608368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 00:27:52.676 [2024-11-19 09:29:53.608471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.608502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 00:27:52.676 [2024-11-19 09:29:53.608681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.608719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 00:27:52.676 [2024-11-19 09:29:53.608923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.608960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 00:27:52.676 [2024-11-19 09:29:53.609061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.609091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 00:27:52.676 [2024-11-19 09:29:53.609298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.609326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 00:27:52.676 [2024-11-19 09:29:53.609431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.609460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 00:27:52.676 [2024-11-19 09:29:53.609636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.609669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 00:27:52.676 [2024-11-19 09:29:53.609849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.676 [2024-11-19 09:29:53.609880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.676 qpair failed and we were unable to recover it. 
00:27:52.676 [2024-11-19 09:29:53.610049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.610085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.610210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.610241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.610425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.610456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.610577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.610607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.610801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.610831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.610939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.610976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.611101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.611130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.611330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.611360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.611480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.611509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.611684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.611713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.611898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.611930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.612055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.612087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.612202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.612233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.612427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.612459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.612572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.612604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.612738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.612778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.612983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.613033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.613243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.613288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.613524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.613573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.613729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.613773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.613935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.613988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.676 [2024-11-19 09:29:53.614124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.676 [2024-11-19 09:29:53.614165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.676 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.614453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.614490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.614623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.614656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.614829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.614861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.615037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.615074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.615264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.615296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.615475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.615506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.615768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.615800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.615982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.616017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.616125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.616155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.616392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.616425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.616618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.616651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.616759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.616798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.617013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.617049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.617221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.617254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.617360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.617393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.617508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.617541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.617653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.617686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.617858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.617891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.618141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.618175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.618357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.618389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.618508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.618540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.618729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.618761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.618871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.618904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.619032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.619064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.619246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.619278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.619414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.619447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.619565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.619597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.619777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.619810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.619921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.619966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.620165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.620197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.620370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.620402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.620589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.620621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.620809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.620841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.620975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.621009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.621208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.621240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.621407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.621439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.621613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.677 [2024-11-19 09:29:53.621645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.677 qpair failed and we were unable to recover it.
00:27:52.677 [2024-11-19 09:29:53.621745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.621777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.621920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.621966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.622169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.622201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.622338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.622369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.622490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.622522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.622762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.622794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.622908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.622939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.623181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.623213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.623457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.623490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.623661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.623694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.623811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.623844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.624022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.624057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.624244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.624276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.624387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.624419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.624601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.624639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.624819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.624852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.625093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.625128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.625313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.625344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.625446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.625478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.625597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.625629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.625749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.625781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.625962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.625997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.626169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.626200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.626377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.626409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.626650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.626682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.626786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.626818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.626938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.627011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.627215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.627248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.627498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.627531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.627646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.627677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.627779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.627810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.628051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.628085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.628194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.628225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.628414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.628445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.628734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.628766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.628942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.628992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.629181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.629212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.678 [2024-11-19 09:29:53.629447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.678 [2024-11-19 09:29:53.629478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.678 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.629721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.629753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.629935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.629982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.630092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.630126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.630266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.630299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.630478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.630508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.630719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.630751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.630861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.630893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.631193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.631227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.631484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.631516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.631719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.631751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.631967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.632002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.632201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.632234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.632422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.632454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.632655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.632686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.632868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.632900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.633099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.633133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.633268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.633311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.633554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.633587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.633771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.633802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.633978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.634013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.634124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.634156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.634334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.634366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.634480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.634511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.634695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.634726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.635003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.635038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.635222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.635255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.635366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.635397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.635524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.635556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.635673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.635705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.635920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.635961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.636141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.636173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.636344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.636375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.636498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.636530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.636710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.636742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.636999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.637031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.637205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.637236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.679 qpair failed and we were unable to recover it.
00:27:52.679 [2024-11-19 09:29:53.637475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.679 [2024-11-19 09:29:53.637507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.637743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.637774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.637906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.637938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.638071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.638103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.638282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.638313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.638572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.638603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.638780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.638812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.639001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.639035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.639147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.639179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.639308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.639340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.639570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.639601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.639841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.639872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.640045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.640078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.640198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.640230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.640470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.640501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.640684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.640716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.640895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.640926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.641133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.641166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.641293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.641323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.641560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.641591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.641826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.641864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.642142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.642175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.642290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.642320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.642508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.642540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.642746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.642776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.642895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.642926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.643122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.643154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.643261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.643293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.643417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.643448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.643567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.643598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.643713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.643744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.643917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.643961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.644232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.644263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.644527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.644558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.644782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.644814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.644960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.644993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.645125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.680 [2024-11-19 09:29:53.645156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.680 qpair failed and we were unable to recover it.
00:27:52.680 [2024-11-19 09:29:53.645285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.680 [2024-11-19 09:29:53.645317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.680 qpair failed and we were unable to recover it. 00:27:52.680 [2024-11-19 09:29:53.645527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.645560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.645665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.645696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.645968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.646004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.646271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.646304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.646429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.646461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.646648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.646681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.646853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.646885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.647076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.647109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.647225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.647257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 
00:27:52.681 [2024-11-19 09:29:53.647416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.647489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.647631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.647668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.647841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.647874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.648134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.648169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.648429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.648462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.648599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.648632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.648842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.648873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.649114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.649147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.649265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.649297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.649536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.649567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 
00:27:52.681 [2024-11-19 09:29:53.649827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.649859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.649974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.650008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.650276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.650308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.650501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.650533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.650671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.650703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.650884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.650916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.651189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.651222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.651404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.651436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.651679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.651712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.651900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.651932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 
00:27:52.681 [2024-11-19 09:29:53.652138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.652170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.652357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.652390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.652512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.652545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.652663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.681 [2024-11-19 09:29:53.652696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.681 qpair failed and we were unable to recover it. 00:27:52.681 [2024-11-19 09:29:53.652871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.652903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.653028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.653062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.653190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.653221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.653354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.653391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.653654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.653685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.653851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.653884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 
00:27:52.682 [2024-11-19 09:29:53.654059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.654093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.654354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.654386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.654495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.654526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.654708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.654740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.654977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.655010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.655245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.655277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.655398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.655429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.655635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.655668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.655787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.655818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.655964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.655997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 
00:27:52.682 [2024-11-19 09:29:53.656232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.656269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.656529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.656562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.656671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.656702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.656877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.656910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.657042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.657074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.657261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.657293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.657422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.657456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.657558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.657589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.657758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.657791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.657899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.657930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 
00:27:52.682 [2024-11-19 09:29:53.658054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.658085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.658217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.658249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.658484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.658516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.658698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.658729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.658849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.658880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.659016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.659050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.659230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.659261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.659523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.659555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.659806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.659837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.660017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.660050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 
00:27:52.682 [2024-11-19 09:29:53.660183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.660215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.660336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.682 [2024-11-19 09:29:53.660369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.682 qpair failed and we were unable to recover it. 00:27:52.682 [2024-11-19 09:29:53.660568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.660600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.660778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.660810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.660988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.661021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.661141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.661173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.661384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.661416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.661654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.661691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.661867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.661899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.662038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.662070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 
00:27:52.683 [2024-11-19 09:29:53.662270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.662302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.662468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.662500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.662691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.662723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.662932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.662984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.663118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.663151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.663267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.663298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.663474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.663506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.663690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.663721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.663847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.663878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.664087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.664120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 
00:27:52.683 [2024-11-19 09:29:53.664239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.664272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.664466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.664497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.664626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.664657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.664828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.664859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.664981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.665014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.665251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.665283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.665399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.665433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.665717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.665748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.665925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.665965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.666088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.666121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 
00:27:52.683 [2024-11-19 09:29:53.666310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.666341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.666463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.666495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.666695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.666726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.666900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.666932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.667141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.667175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.667304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.667337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.667506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.667539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.667732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.667763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.667970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.668004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 00:27:52.683 [2024-11-19 09:29:53.668190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.683 [2024-11-19 09:29:53.668222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.683 qpair failed and we were unable to recover it. 
00:27:52.684 [2024-11-19 09:29:53.668458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.668489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.668663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.668694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.668890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.668922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.669220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.669252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.669519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.669551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.669729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.669761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.669876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.669908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.670088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.670127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.670318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.670350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.670535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.670568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 
00:27:52.684 [2024-11-19 09:29:53.670762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.670794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.671005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.671038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.671216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.671249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.671427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.671459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.671641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.671673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.671805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.671837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.672009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.672043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.672215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.672247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.672433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.672466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.672706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.672739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 
00:27:52.684 [2024-11-19 09:29:53.672843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.672874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.673057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.673091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.673206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.673238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.673411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.673443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.673611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.673643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.673755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.673788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.674003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.674035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.674222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.674254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.674442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.674475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.674584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.674614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 
00:27:52.684 [2024-11-19 09:29:53.674790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.674822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.675057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.675091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.675327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.675360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.675602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.675634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.675753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.675785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.676034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.676068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.676240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.684 [2024-11-19 09:29:53.676272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.684 qpair failed and we were unable to recover it. 00:27:52.684 [2024-11-19 09:29:53.676505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.685 [2024-11-19 09:29:53.676536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.685 qpair failed and we were unable to recover it. 00:27:52.685 [2024-11-19 09:29:53.676652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.685 [2024-11-19 09:29:53.676683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.685 qpair failed and we were unable to recover it. 00:27:52.685 [2024-11-19 09:29:53.676866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.685 [2024-11-19 09:29:53.676899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.685 qpair failed and we were unable to recover it. 
00:27:52.685 [2024-11-19 09:29:53.677015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.685 [2024-11-19 09:29:53.677047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.685 qpair failed and we were unable to recover it.
00:27:52.685 [... the connect()/qpair-failure pair above repeats continuously from 09:29:53.677 through 09:29:53.722, always with errno = 111 against addr=10.0.0.2, port=4420; tqpair=0x7faea4000b90 fails throughout, and from 09:29:53.719 onward the failing tqpair is 0x7fae9c000b90 ...]
00:27:52.969 [2024-11-19 09:29:53.722427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.969 [2024-11-19 09:29:53.722459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:52.969 qpair failed and we were unable to recover it.
00:27:52.969 [2024-11-19 09:29:53.722648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.722681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 00:27:52.969 [2024-11-19 09:29:53.722852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.722885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 00:27:52.969 [2024-11-19 09:29:53.723006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.723039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 00:27:52.969 [2024-11-19 09:29:53.723240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.723274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 00:27:52.969 [2024-11-19 09:29:53.723465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.723497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 00:27:52.969 [2024-11-19 09:29:53.723622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.723654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 00:27:52.969 [2024-11-19 09:29:53.723916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.723958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 00:27:52.969 [2024-11-19 09:29:53.724131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.724164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 00:27:52.969 [2024-11-19 09:29:53.724333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.724365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 00:27:52.969 [2024-11-19 09:29:53.724600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.969 [2024-11-19 09:29:53.724633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.969 qpair failed and we were unable to recover it. 
00:27:52.969 [2024-11-19 09:29:53.724826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.724859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.725072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.725107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.725227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.725260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.725450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.725486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.725667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.725698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.725968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.726003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.726123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.726155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.726269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.726301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.726571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.726604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.726841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.726873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 
00:27:52.970 [2024-11-19 09:29:53.727111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.727145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.727316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.727349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.727517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.727549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.727677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.727707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.727959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.727994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.728102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.728131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.728302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.728331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.728503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.728531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.728794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.728823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.728968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.729001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 
00:27:52.970 [2024-11-19 09:29:53.729191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.729220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.729390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.729419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.729627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.729657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.729788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.729816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.730053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.730084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.730201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.730230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.730351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.730380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.730628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.730663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.730838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.730867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.731050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.731080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 
00:27:52.970 [2024-11-19 09:29:53.731266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.731297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.731506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.731536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.731810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.731840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.732097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.732129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.732261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.732291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.732496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.732526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.733465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.733511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.970 [2024-11-19 09:29:53.733764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.970 [2024-11-19 09:29:53.733798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.970 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.734009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.734043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.734305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.734338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 
00:27:52.971 [2024-11-19 09:29:53.734523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.734556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.734692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.734725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.734996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.735028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.735215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.735247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.735438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.735471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.735640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.735670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.735798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.735829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.735957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.735991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.736176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.736209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.736452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.736483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 
00:27:52.971 [2024-11-19 09:29:53.736674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.736706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.736967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.737000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.737182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.737213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.737390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.737422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.737659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.737733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.738000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.738036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.738166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.738199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.738374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.738406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.738649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.738681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.738792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.738824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 
00:27:52.971 [2024-11-19 09:29:53.738943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.738985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.739235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.739268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.739448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.739479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.739601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.739632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.739821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.739854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.740161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.740193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.740378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.740410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.740534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.740577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.740819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.740851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.741044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.741078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 
00:27:52.971 [2024-11-19 09:29:53.741265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.741296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.971 qpair failed and we were unable to recover it. 00:27:52.971 [2024-11-19 09:29:53.741552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.971 [2024-11-19 09:29:53.741584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.741771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.741801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.742080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.742113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.742296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.742326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.742582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.742614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.742805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.742836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.742967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.742999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.743182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.743214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.743323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.743352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 
00:27:52.972 [2024-11-19 09:29:53.743593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.743626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.743768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.743801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.743919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.743958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.744168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.744199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.744436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.744469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.744647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.744683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.744856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.744886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.745077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.745110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.745231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.745262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.745394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.745424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 
00:27:52.972 [2024-11-19 09:29:53.745536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.745567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.745672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.745705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.745835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.745866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.746046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.746080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.746224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.746273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.746519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.746552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.746817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.746850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.746968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.747002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.747271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.747304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.747446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.747477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 
00:27:52.972 [2024-11-19 09:29:53.747658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.747691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.747960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.747995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.748122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.748154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.748339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.748373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.748490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.748522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.748655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.972 [2024-11-19 09:29:53.748686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.972 qpair failed and we were unable to recover it. 00:27:52.972 [2024-11-19 09:29:53.748883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.748914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.749121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.749163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.749277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.749309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.749428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.749460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 
00:27:52.973 [2024-11-19 09:29:53.749641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.749672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.749782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.749813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.750003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.750037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.750211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.750243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.750428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.750460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.750681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.750713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.750853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.750885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.751014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.751047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.751174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.751205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 00:27:52.973 [2024-11-19 09:29:53.751360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.973 [2024-11-19 09:29:53.751392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.973 qpair failed and we were unable to recover it. 
00:27:52.973 [2024-11-19 09:29:53.751528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.973 [2024-11-19 09:29:53.751560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.973 qpair failed and we were unable to recover it.
00:27:52.973 [... the three-line failure above repeats at sub-millisecond intervals for tqpair=0x7faea4000b90 through 09:29:53.754245 ...]
00:27:52.973 [2024-11-19 09:29:53.754458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.973 [2024-11-19 09:29:53.754530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:52.973 qpair failed and we were unable to recover it.
00:27:52.974 [... the same failure repeats for tqpair=0x7fae9c000b90 through 09:29:53.762921, then resumes for tqpair=0x7faea4000b90 from 09:29:53.763066 onward ...]
00:27:52.979 [2024-11-19 09:29:53.795259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.979 [2024-11-19 09:29:53.795293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.979 qpair failed and we were unable to recover it.
00:27:52.979 [2024-11-19 09:29:53.795516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.795548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.795669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.795702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.795876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.795908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.796111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.796144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.796328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.796366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.796470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.796502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.796668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.796701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.796896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.796928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.797046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.797079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.797182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.797216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 
00:27:52.979 [2024-11-19 09:29:53.797336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.797368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.797538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.797572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.797748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.797779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.797969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.798003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.798267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.798300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.798432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.798464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.798703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.798735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.798985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.799018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.799250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.799283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.799475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.799507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 
00:27:52.979 [2024-11-19 09:29:53.799687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.799720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.799938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.799979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.800093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.800127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.800237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.800270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.800391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.800423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.800633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.979 [2024-11-19 09:29:53.800666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.979 qpair failed and we were unable to recover it. 00:27:52.979 [2024-11-19 09:29:53.800904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.800936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.801066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.801100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.801229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.801262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.801375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.801407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 
00:27:52.980 [2024-11-19 09:29:53.801528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.801561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.801765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.801808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.801927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.801972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.802172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.802204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.802331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.802364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.802606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.802638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.802749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.802782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.802904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.802935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.803067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.803099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.803284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.803317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 
00:27:52.980 [2024-11-19 09:29:53.803519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.803551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.803721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.803753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.803989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.804023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.804195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.804228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.804409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.804448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.804655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.804687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.804863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.804897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.805113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.805147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.805264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.805297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.805539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.805571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 
00:27:52.980 [2024-11-19 09:29:53.805746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.805778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.805906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.805939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.806154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.806187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.806360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.806392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.806587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.806620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.806809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.806840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.807017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.807050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.807203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.807235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.807421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.807455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.807638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.807672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 
00:27:52.980 [2024-11-19 09:29:53.807877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.807910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.808044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.980 [2024-11-19 09:29:53.808079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.980 qpair failed and we were unable to recover it. 00:27:52.980 [2024-11-19 09:29:53.808266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.808298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.808400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.808431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.808608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.808640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.808818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.808851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.808994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.809030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.809221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.809254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.809354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.809387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.809490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.809523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 
00:27:52.981 [2024-11-19 09:29:53.809689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.809734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.809854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.809888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.810024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.810058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.810180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.810213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.810407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.810440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.810709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.810741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.810985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.811019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.811208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.811241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.811345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.811378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.811499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.811532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 
00:27:52.981 [2024-11-19 09:29:53.811789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.811821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.812088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.812121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.812309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.812342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.812526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.812557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.812755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.812793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.812988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.813022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.813139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.813172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.813342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.813374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.813562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.813606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.813783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.813816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 
00:27:52.981 [2024-11-19 09:29:53.813938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.813981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.814173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.814206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.981 [2024-11-19 09:29:53.814377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.981 [2024-11-19 09:29:53.814410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.981 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.814526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.814558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.814733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.814764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.814937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.814982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.815248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.815279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.815492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.815523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.815652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.815686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.815788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.815821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 
00:27:52.982 [2024-11-19 09:29:53.816063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.816096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.816361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.816394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.816660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.816693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.816889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.816922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.817045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.817078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.817252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.817285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.817470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.817503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.817655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.817686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.817800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.817833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.818123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.818157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 
00:27:52.982 [2024-11-19 09:29:53.818276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.818309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.818573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.818606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.818728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.818761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.818935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.818977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.819159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.819193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.819323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.819355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.819547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.819579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.819706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.819739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.819979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.820015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.820191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.820224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 
00:27:52.982 [2024-11-19 09:29:53.820355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.820387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.820559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.820591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.820709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.820741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.820870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.820903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.821038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.821078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.821246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.821279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.821450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.821483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.821594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.821626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.821737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.982 [2024-11-19 09:29:53.821768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.982 qpair failed and we were unable to recover it. 00:27:52.982 [2024-11-19 09:29:53.821889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.821921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 
00:27:52.983 [2024-11-19 09:29:53.822101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.822134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.822241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.822271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.822458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.822491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.822654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.822687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.822827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.822859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.822967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.823001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.823193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.823226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.823464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.823496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.823693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.823726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.823991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.824024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 
00:27:52.983 [2024-11-19 09:29:53.824202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.824234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.824345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.824378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.824561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.824593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.824773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.824805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.824973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.825006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.825112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.825144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.825408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.825441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.825726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.825759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.826023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.826056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 00:27:52.983 [2024-11-19 09:29:53.826233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.983 [2024-11-19 09:29:53.826265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:52.983 qpair failed and we were unable to recover it. 
00:27:52.983 [2024-11-19 09:29:53.826448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.826480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.826643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.826675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.826855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.826886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.826996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.827028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.827242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.827274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.827532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.827564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.827804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.827836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.827973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.828006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.828201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.828233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.828416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.828449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.828629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.828661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.828916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.828969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.829172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.829205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.829376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.829409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.829527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.829564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.983 [2024-11-19 09:29:53.829731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.983 [2024-11-19 09:29:53.829763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.983 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.829912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.829943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.830083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.830114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.830218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.830251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.830370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.830401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.830524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.830556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.830706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.830738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.830907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.830938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.831152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.831185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.831372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.831403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.831606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.831639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.831820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.831851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.832035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.832068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.832204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.832236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.832475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.832508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.832625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.832657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.832844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.832877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.833066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.833101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.833232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.833264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.833374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.833406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.833525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.833557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.833733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.833766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.833880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.833913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.834103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.834138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.834401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.834434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.834673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.834706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.835021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.835095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.835259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.835295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.835413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.835447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.835642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.835675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.835797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.835830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.836058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.836094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.836268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.836302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.836478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.836510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.836623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.836657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.984 [2024-11-19 09:29:53.836785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.984 [2024-11-19 09:29:53.836816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.984 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.837000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.837036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.837248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.837283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.837449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.837480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.837719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.837751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.837891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.837923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.838057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.838090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.838364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.838397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.838506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.838538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.838707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.838739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.838924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.838965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.839076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.839109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.839292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.839324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.839527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.839560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.839734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.839767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.839977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.840012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.840150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.840183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.840300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.840332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.840534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.840567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.840803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.840837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.841009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.841042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.841213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.841245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.841434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.841467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.841654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.841687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.841970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.842005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.842192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.842225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.842350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.842383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.842650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.842683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.842887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.842919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.843176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.843209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.843444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.843476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.843713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.843746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.843990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.844025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.844210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.844243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.844500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.844533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.844662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.844694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.844815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.985 [2024-11-19 09:29:53.844848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.985 qpair failed and we were unable to recover it.
00:27:52.985 [2024-11-19 09:29:53.844982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.845015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.845133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.845166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.845403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.845435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.845633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.845664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.845836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.845867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.846050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.846084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.846292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.846323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.846437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.846470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.846658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.846696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.846879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.846911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.847046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.847079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.847285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.847319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.847438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.847470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.847590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.847622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.847728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.847761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.847962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.847995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.848109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.848141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.848312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.848345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.848523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.848554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.848789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.848820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.849040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.849073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.849250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.849282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.849394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.849427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.849599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.849630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.849742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.849774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.849963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.849997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.850237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.850270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.850393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.850425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.850612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.850644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.850833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.850865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.986 [2024-11-19 09:29:53.851065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.986 [2024-11-19 09:29:53.851097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.986 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.851275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.851307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.851541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.851582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.851761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.851793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.851974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.852008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.852190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.852234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.852364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.852397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.852569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.852601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.852805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.852836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.853046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.853081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.853253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.853285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.853461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.853494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.853692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.853726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.853968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.854001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.854119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.854152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.854336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.854369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.854551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.854583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.854751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.854783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.854960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.854993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.855260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.855293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.855531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.855564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.855749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.855780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.855954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.855988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.856107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.856140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.856314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.856346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.856609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.856641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.856812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.856844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.857052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.857087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.857265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.857297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.857487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.857521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.857641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.857674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.857862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.857896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.858087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.858121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.858340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.858373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.858610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.858644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.858762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.858794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.858966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.859000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.987 qpair failed and we were unable to recover it.
00:27:52.987 [2024-11-19 09:29:53.859124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.987 [2024-11-19 09:29:53.859157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.859353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.859384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.859554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.859586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.859788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.859821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.860027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.860060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.860198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.860231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.860435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.860467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.860720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.860752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.860938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.860993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.861121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.861153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.861276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.861309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.861546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.861577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.861711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.861743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.861851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.861883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.862061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.862095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.862344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.988 [2024-11-19 09:29:53.862376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:52.988 qpair failed and we were unable to recover it.
00:27:52.988 [2024-11-19 09:29:53.862485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.862517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.862635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.862666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.862835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.862867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.862982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.863015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.863213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.863244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.863375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.863407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.863668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.863701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.863894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.863927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.864077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.864110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.864225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.864257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 
00:27:52.988 [2024-11-19 09:29:53.864437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.864470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.864646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.864679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.864856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.864887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.865083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.865116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.865317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.865351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.865464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.865496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.865734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.865767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.866001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.866034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.866162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.866194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.988 [2024-11-19 09:29:53.866310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.866343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 
00:27:52.988 [2024-11-19 09:29:53.866577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.988 [2024-11-19 09:29:53.866615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.988 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.866720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.866752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.866991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.867023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.867228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.867260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.867438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.867470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.867593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.867625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.867885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.867918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.868118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.868149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.868440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.868472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.868669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.868701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 
00:27:52.989 [2024-11-19 09:29:53.868889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.868920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.869128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.869162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.869425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.869457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.869625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.869657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.869918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.869961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.870233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.870266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.870448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.870480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.870676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.870708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.870884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.870917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.871066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.871097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 
00:27:52.989 [2024-11-19 09:29:53.871279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.871312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.871430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.871461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.871635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.871667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.871867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.871899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.872084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.872116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.872234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.872266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.872385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.872417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.872604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.872640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.872828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.872861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.872982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.873016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 
00:27:52.989 [2024-11-19 09:29:53.873189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.873222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.873345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.873377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.873588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.873620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.873881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.873913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.874132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.874166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.874404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.874437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.989 qpair failed and we were unable to recover it. 00:27:52.989 [2024-11-19 09:29:53.874618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.989 [2024-11-19 09:29:53.874650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.874846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.874877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.875010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.875045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.875233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.875265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 
00:27:52.990 [2024-11-19 09:29:53.875470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.875502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.875635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.875668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.875789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.875822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.876000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.876033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.876215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.876247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.876347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.876379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.876557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.876590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.876781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.876813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.877020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.877053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.877167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.877199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 
00:27:52.990 [2024-11-19 09:29:53.877310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.877341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.877524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.877556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.877741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.877772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.877877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.877909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.878039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.878078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.878280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.878313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.878427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.878460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.878645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.878678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.878968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.879006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.879136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.879169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 
00:27:52.990 [2024-11-19 09:29:53.879359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.879392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.879509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.879542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.879722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.879755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.879874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.879907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.880132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.880166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.880361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.880393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.880631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.880663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.880899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.880931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.881153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.881188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.881424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.881456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 
00:27:52.990 [2024-11-19 09:29:53.881626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.881659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.881901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.881934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.990 [2024-11-19 09:29:53.882087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.990 [2024-11-19 09:29:53.882121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.990 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.882363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.882396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.882583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.882615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.882796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.882829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.883000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.883033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.883316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.883348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.883478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.883512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.883693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.883725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 
00:27:52.991 [2024-11-19 09:29:53.883908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.883941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.884137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.884169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.884379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.884412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.884527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.884559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.884694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.884727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.884990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.885025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.885195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.885228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.885332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.885366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.885573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.885606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.885736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.885768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 
00:27:52.991 [2024-11-19 09:29:53.886006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.886063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.886253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.886285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.886394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.886427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.886596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.886628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.886758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.886790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.887027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.887068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.887244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.887276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.887445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.887477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.887717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.887750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.887922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.887962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 
00:27:52.991 [2024-11-19 09:29:53.888200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.888233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.888401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.888434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.888690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.888724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.888972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.991 [2024-11-19 09:29:53.889007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.991 qpair failed and we were unable to recover it. 00:27:52.991 [2024-11-19 09:29:53.889128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.889160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.889330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.889363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.889539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.889572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.889744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.889777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.889895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.889928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.890213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.890246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 
00:27:52.992 [2024-11-19 09:29:53.890483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.890514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.890700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.890732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.890862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.890894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.891030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.891065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.891239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.891271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.891532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.891565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.891755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.891789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.891909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.891942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.892092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.892125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.892314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.892346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 
00:27:52.992 [2024-11-19 09:29:53.892459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.892492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.892673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.892706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.892881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.892921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.893050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.893083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.893191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.893224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.893407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.893440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.893565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.893598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.893780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.893812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.894085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.894119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 00:27:52.992 [2024-11-19 09:29:53.894303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.894335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it. 
00:27:52.992 [2024-11-19 09:29:53.894575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.992 [2024-11-19 09:29:53.894608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.992 qpair failed and we were unable to recover it.
00:27:52.998 [... the identical connect() failure (errno = 111) and "qpair failed and we were unable to recover it." message repeat continuously for tqpair=0x22f6ba0 against addr=10.0.0.2, port=4420 from [2024-11-19 09:29:53.894575] through [2024-11-19 09:29:53.940358]; duplicate log lines omitted ...]
00:27:52.998 [2024-11-19 09:29:53.940574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.998 [2024-11-19 09:29:53.940607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.998 qpair failed and we were unable to recover it. 00:27:52.998 [2024-11-19 09:29:53.940730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.998 [2024-11-19 09:29:53.940762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.998 qpair failed and we were unable to recover it. 00:27:52.998 [2024-11-19 09:29:53.940875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.998 [2024-11-19 09:29:53.940908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.998 qpair failed and we were unable to recover it. 00:27:52.998 [2024-11-19 09:29:53.941097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.998 [2024-11-19 09:29:53.941130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.998 qpair failed and we were unable to recover it. 00:27:52.998 [2024-11-19 09:29:53.941341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.998 [2024-11-19 09:29:53.941373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.998 qpair failed and we were unable to recover it. 00:27:52.998 [2024-11-19 09:29:53.941491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.998 [2024-11-19 09:29:53.941523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.998 qpair failed and we were unable to recover it. 00:27:52.998 [2024-11-19 09:29:53.941711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.998 [2024-11-19 09:29:53.941744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.998 qpair failed and we were unable to recover it. 00:27:52.998 [2024-11-19 09:29:53.942004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.998 [2024-11-19 09:29:53.942039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.998 qpair failed and we were unable to recover it. 00:27:52.998 [2024-11-19 09:29:53.942170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.942202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.942440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.942473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 
00:27:52.999 [2024-11-19 09:29:53.942659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.942692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.942810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.942842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.943083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.943117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.943301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.943333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.943594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.943627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.943819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.943852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.943985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.944018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.944307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.944340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.944508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.944540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.944643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.944676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 
00:27:52.999 [2024-11-19 09:29:53.944860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.944893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.945104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.945136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.945376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.945408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.945592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.945626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.945749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.945782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.946065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.946098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.946211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.946249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.946422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.946456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.946713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.946745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.946944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.946986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 
00:27:52.999 [2024-11-19 09:29:53.947173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.947205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.947383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.947415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.947586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.947617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.947806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.947839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.947960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.947994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.948111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.948142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.948257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.948291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.948429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.948462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.948642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.948674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.948797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.948831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 
00:27:52.999 [2024-11-19 09:29:53.948964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.948997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.949104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.949137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.949266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.949299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.949497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.999 [2024-11-19 09:29:53.949530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:52.999 qpair failed and we were unable to recover it. 00:27:52.999 [2024-11-19 09:29:53.949706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.949738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.949850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.949881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.950002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.950035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.950147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.950178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.950435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.950469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.950589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.950621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 
00:27:53.000 [2024-11-19 09:29:53.950793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.950825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.951010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.951043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.951307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.951337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.951525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.951557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.951749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.951782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.951964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.951998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.952225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.952257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.952378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.952410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.952648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.952680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.952859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.952891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 
00:27:53.000 [2024-11-19 09:29:53.953154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.953188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.953304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.953335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.953515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.953547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.953652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.953685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.953792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.953825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.953990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.954023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.954258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.954289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.954602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.954674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.954882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.954919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.955203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.955236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 
00:27:53.000 [2024-11-19 09:29:53.955503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.955536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.955748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.955780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.956018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.956052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.956189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.956221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.956420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.956451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.956734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.956767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.956961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.956995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.957183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.957215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.957395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.000 [2024-11-19 09:29:53.957428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.000 qpair failed and we were unable to recover it. 00:27:53.000 [2024-11-19 09:29:53.957603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.957636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 
00:27:53.001 [2024-11-19 09:29:53.957759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.957800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.958013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.958047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.958180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.958212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.958332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.958364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.958574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.958606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.958791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.958824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.958933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.958976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.959165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.959197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.959435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.959467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.959605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.959637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 
00:27:53.001 [2024-11-19 09:29:53.959757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.959790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.959991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.960026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.960283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.960315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.960500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.960531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.960669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.960702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.960989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.961022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.961213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.961245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.961423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.961453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.961571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.961601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.961837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.961869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 
00:27:53.001 [2024-11-19 09:29:53.962137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.962171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.962370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.962403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.962609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.962641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.962816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.962847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.963027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.963061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.963299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.963331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.963444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.963475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.963744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.963776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.963896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.001 [2024-11-19 09:29:53.963927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.001 qpair failed and we were unable to recover it. 00:27:53.001 [2024-11-19 09:29:53.964147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.964180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 
00:27:53.002 [2024-11-19 09:29:53.964358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.964390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.964602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.964633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.964813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.964844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.964972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.965006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.965137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.965169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.965381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.965414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.965531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.965563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.965699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.965731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.965867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.965899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.966095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.966127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 
00:27:53.002 [2024-11-19 09:29:53.966252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.966291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.966557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.966588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.966692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.966723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.966909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.966941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.967136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.967168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.967308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.967339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.967517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.967548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.967754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.967786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.967989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.968022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.968199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.968230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 
00:27:53.002 [2024-11-19 09:29:53.968386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.968418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.968539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.968570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.968834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.968866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.969171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.969205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.969332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.969363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.969477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.969508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.969640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.969673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.969908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.969940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.970122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.970154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 00:27:53.002 [2024-11-19 09:29:53.970273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.002 [2024-11-19 09:29:53.970306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.002 qpair failed and we were unable to recover it. 
00:27:53.304 [2024-11-19 09:29:54.014378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.014409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.014521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.014553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.014765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.014796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.015061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.015095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.015279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.015311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.015484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.015516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.015682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.015713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.015818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.015857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.016119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.016153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.016287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.016319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 
00:27:53.304 [2024-11-19 09:29:54.016446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.016480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.016661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.016693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.016811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.016842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.016980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.017014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.017199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.017232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.017402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.017436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.017549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.017582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.017712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.017744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.017920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.017965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.018216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.018249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 
00:27:53.304 [2024-11-19 09:29:54.018363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.018396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.018528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.018562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.018813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.018844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.019014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.019047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.019251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.019284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.019532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.019565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.019748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.019780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.019989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.020023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.020157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.020190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.020368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.020400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 
00:27:53.304 [2024-11-19 09:29:54.020605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.020638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.020830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.304 [2024-11-19 09:29:54.020863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.304 qpair failed and we were unable to recover it. 00:27:53.304 [2024-11-19 09:29:54.020978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.021012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.021121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.021152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.021408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.021441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.021626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.021658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.021832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.021864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.022072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.022106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.022278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.022310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.022446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.022478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 
00:27:53.305 [2024-11-19 09:29:54.022692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.022724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.022967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.023000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.023123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.023155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.023362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.023394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.023518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.023550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.023674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.023706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.023874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.023907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.024110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.024150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.024342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.024374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.024572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.024604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 
00:27:53.305 [2024-11-19 09:29:54.024817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.024849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.025067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.025101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.025211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.025243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.025500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.025532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.025704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.025736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.025923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.025963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.026142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.026174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.026280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.026313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.026572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.026603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.026795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.026828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 
00:27:53.305 [2024-11-19 09:29:54.027009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.027042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.027230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.027264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.027515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.027548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.027763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.027796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.027980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.028014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.028249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.028281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.028574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.028607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.028745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.028779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.305 [2024-11-19 09:29:54.028919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.305 [2024-11-19 09:29:54.028961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.305 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.029078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.029109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 
00:27:53.306 [2024-11-19 09:29:54.029344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.029376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.029564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.029596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.029831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.029864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.029987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.030021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.030143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.030175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.030307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.030341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.030527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.030560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.030671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.030703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.030883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.030917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.031123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.031155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 
00:27:53.306 [2024-11-19 09:29:54.031356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.031389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.031508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.031541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.031660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.031693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.031875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.031908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.032155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.032188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.032390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.032421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.032589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.032621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.032829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.032868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.033014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.033048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.033168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.033201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 
00:27:53.306 [2024-11-19 09:29:54.033403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.033435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.033610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.033641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.033828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.033861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.034099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.034134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.034319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.034352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.034536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.034568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.034681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.034713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.034889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.034922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.035067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.035102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.035229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.035261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 
00:27:53.306 [2024-11-19 09:29:54.035451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.035484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.306 [2024-11-19 09:29:54.035673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.306 [2024-11-19 09:29:54.035706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.306 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.035823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.035856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.035976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.036010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.036206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.036240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.036352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.036384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.036498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.036530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.036731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.036763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.037005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.037039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.037175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.037207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 
00:27:53.307 [2024-11-19 09:29:54.037321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.037354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.037485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.037517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.037692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.037725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.037864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.037895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.038109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.038144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.038255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.038285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.038558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.038590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.038786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.038820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.038923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.038967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.039093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.039126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 
00:27:53.307 [2024-11-19 09:29:54.039309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.039341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.039482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.039515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.039641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.039674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.039783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.039815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.039930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.039974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.040164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.040197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.040384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.040416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.040596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.040640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.040762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.040795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 00:27:53.307 [2024-11-19 09:29:54.040918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.307 [2024-11-19 09:29:54.040960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:53.307 qpair failed and we were unable to recover it. 
00:27:53.307 [2024-11-19 09:29:54.041094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.307 [2024-11-19 09:29:54.041125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:53.307 qpair failed and we were unable to recover it.
00:27:53.312 [2024-11-19 09:29:54.074087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.312 [2024-11-19 09:29:54.074159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420
00:27:53.312 qpair failed and we were unable to recover it.
00:27:53.313 [2024-11-19 09:29:54.085190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.085223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.085350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.085384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.085624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.085657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.085760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.085792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.085907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.085939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.086063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.086096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.086275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.086309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.086415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.086450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.086624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.086666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.086771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.086803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 
00:27:53.313 [2024-11-19 09:29:54.086998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.313 [2024-11-19 09:29:54.087034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.313 qpair failed and we were unable to recover it. 00:27:53.313 [2024-11-19 09:29:54.087213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.087244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.087437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.087470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.087669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.087703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.087875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.087914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.088111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.088146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.088330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.088364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.088481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.088512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.088725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.088757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.088945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.088987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 
00:27:53.314 [2024-11-19 09:29:54.089090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.089123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.089297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.089330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.089504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.089536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.089724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.089758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.089867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.089900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.090158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.090192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.090309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.090343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.090450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.090483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.090609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.090642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.090819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.090852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 
00:27:53.314 [2024-11-19 09:29:54.090970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.091007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.091191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.091223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.091338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.091370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.091563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.091597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.091786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.091819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.092080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.092113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.092235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.092272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.092374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.092406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.092614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.092647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.092766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.092797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 
00:27:53.314 [2024-11-19 09:29:54.093038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.093072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.093273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.093307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.093487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.093520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.093638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.093672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.093842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.093874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.314 [2024-11-19 09:29:54.094001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.314 [2024-11-19 09:29:54.094035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.314 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.094249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.094284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.094454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.094486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.094697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.094730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.094992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.095027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 
00:27:53.315 [2024-11-19 09:29:54.095234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.095267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.095504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.095536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.095786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.095817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.096082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.096118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.096354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.096391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.096575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.096607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.096863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.096894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.097018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.097052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.097188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.097222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.097339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.097372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 
00:27:53.315 [2024-11-19 09:29:54.097547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.097580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.097771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.097804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.097990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.098024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.098152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.098186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.098391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.098422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.098521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.098555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.098671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.098703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.098968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.099002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.099257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.099291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.099530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.099564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 
00:27:53.315 [2024-11-19 09:29:54.099751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.099782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.099966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.100000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.100190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.100223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.100334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.100365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.100476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.100509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.100694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.100727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.100856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.100887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.101078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.101111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.101293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.101323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.101571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.101603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 
00:27:53.315 [2024-11-19 09:29:54.101719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.101751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.315 qpair failed and we were unable to recover it. 00:27:53.315 [2024-11-19 09:29:54.101965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.315 [2024-11-19 09:29:54.102000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.102188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.102219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.102393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.102426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.102662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.102693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.102940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.103004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.103137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.103171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.103430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.103463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.103649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.103680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.103803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.103835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 
00:27:53.316 [2024-11-19 09:29:54.103965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.103999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.104126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.104158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.104396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.104428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.104687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.104719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.104837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.104875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.105097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.105131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.105394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.105425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.105662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.105695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.105871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.105905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.106094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.106128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 
00:27:53.316 [2024-11-19 09:29:54.106330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.106362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.106529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.106562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.106739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.106771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.106965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.107000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.107176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.107215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.107334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.107365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.107600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.107633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.107740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.107772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.107915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.107956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.108198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.108231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 
00:27:53.316 [2024-11-19 09:29:54.108356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.108389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.108591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.108623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.316 [2024-11-19 09:29:54.108762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.316 [2024-11-19 09:29:54.108794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.316 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.108894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.108926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.109130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.109162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.109346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.109377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.109610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.109641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.109902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.109935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.110137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.110171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.110367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.110398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 
00:27:53.317 [2024-11-19 09:29:54.110581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.110614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.110858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.110892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.111136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.111171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.111282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.111314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.111407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.111440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.111545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.111577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.111766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.111798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.111982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.112016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.112130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.112162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 00:27:53.317 [2024-11-19 09:29:54.112294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.112325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it. 
00:27:53.317 [2024-11-19 09:29:54.112494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.317 [2024-11-19 09:29:54.112527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.317 qpair failed and we were unable to recover it.
00:27:53.317 [... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeat continuously from [2024-11-19 09:29:54.112733] through [2024-11-19 09:29:54.157347]; duplicate entries elided ...]
00:27:53.323 [2024-11-19 09:29:54.157597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.157630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.157807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.157838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.157969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.158003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.158214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.158247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.158424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.158456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.158691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.158724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.158842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.158874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.159071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.159104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.159230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.159262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.159446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.159478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 
00:27:53.323 [2024-11-19 09:29:54.159668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.159699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.159883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.159915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.160103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.160137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.160340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.160371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.160628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.160661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.160779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.160811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.160995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.161027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.161199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.161231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.161470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.161502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.161688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.161720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 
00:27:53.323 [2024-11-19 09:29:54.161890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.161922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.323 qpair failed and we were unable to recover it. 00:27:53.323 [2024-11-19 09:29:54.162121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.323 [2024-11-19 09:29:54.162154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.162324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.162355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.162565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.162596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.162856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.162889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.163111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.163145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.163246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.163278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.163542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.163574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.163815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.163847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.164014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.164047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 
00:27:53.324 [2024-11-19 09:29:54.164238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.164270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.164529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.164561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.164732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.164764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.164885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.164917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.165161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.165198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.165382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.165414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.165600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.165631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.165835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.165866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.166050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.166082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.166256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.166287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 
00:27:53.324 [2024-11-19 09:29:54.166539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.166570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.166755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.166786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.166992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.167026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.167216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.167247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.167357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.167390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.167586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.167617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.167859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.167891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.168027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.168060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.168321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.168353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.168618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.168650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 
00:27:53.324 [2024-11-19 09:29:54.168887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.168919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.169130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.169163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.169370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.169401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.169587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.169619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.169804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.169836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.170014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.170047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.170229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.170262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.170509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.170541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.170726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.324 [2024-11-19 09:29:54.170758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.324 qpair failed and we were unable to recover it. 00:27:53.324 [2024-11-19 09:29:54.170871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.170903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 
00:27:53.325 [2024-11-19 09:29:54.171152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.171186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.171360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.171393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.171595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.171626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.171864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.171896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.172102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.172134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.172322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.172354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.172594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.172626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.172806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.172838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.173095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.173130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.173257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.173289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 
00:27:53.325 [2024-11-19 09:29:54.173494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.173527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.173718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.173761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.173969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.174003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.174190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.174223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.174411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.174450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.174688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.174720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.174977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.175011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.175189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.175221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.175394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.175426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.175707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.175738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 
00:27:53.325 [2024-11-19 09:29:54.175923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.175963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.176224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.176256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.176383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.176414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.176591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.176623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.176743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.176775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.177033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.177065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.177304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.177337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.177535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.177567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.177833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.177865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.177989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.178022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 
00:27:53.325 [2024-11-19 09:29:54.178281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.178314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.178506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.178539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.178782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.178815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.179051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.179085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.179254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.179286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.179472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.179503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.325 [2024-11-19 09:29:54.179799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.325 [2024-11-19 09:29:54.179831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.325 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.180093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.180126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.180353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.180386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.180646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.180677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 
00:27:53.326 [2024-11-19 09:29:54.180790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.180822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.181021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.181054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.181249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.181281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.181458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.181490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.181600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.181631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.181763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.181795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.182064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.182097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.182264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.182295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.182472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.182505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.182700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.182732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 
00:27:53.326 [2024-11-19 09:29:54.182942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.182987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.183168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.183199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.183402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.183435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.183545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.183577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.183838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.183875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.184077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.184111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.184241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.184273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.184515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.184546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.184829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.184861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.185045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.185078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 
00:27:53.326 [2024-11-19 09:29:54.185207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.185240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.185497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.185530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.185847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.185878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.186124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.186158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.186345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.186378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.186614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.186646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.186816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.186848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.186999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.187033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.187300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.187332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 00:27:53.326 [2024-11-19 09:29:54.187617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.326 [2024-11-19 09:29:54.187649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.326 qpair failed and we were unable to recover it. 
00:27:53.327 [2024-11-19 09:29:54.187925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.327 [2024-11-19 09:29:54.187970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420
00:27:53.327 qpair failed and we were unable to recover it.
[... the identical three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back for every retry from [2024-11-19 09:29:54.188208] through [2024-11-19 09:29:54.238722]; only the timestamps change ...]
00:27:53.332 [2024-11-19 09:29:54.238925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.238965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.239136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.239175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.239452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.239484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.239622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.239654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.239775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.239808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.240048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.240082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.240198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.240230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.240409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.240441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.240691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.240723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.240845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.240877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 
00:27:53.332 [2024-11-19 09:29:54.240999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-11-19 09:29:54.241033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.332 qpair failed and we were unable to recover it. 00:27:53.332 [2024-11-19 09:29:54.241218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.241250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.241440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.241473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.241585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.241617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.241729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.241762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.242033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.242068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.242173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.242205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.242330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.242364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.242540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.242573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.242750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.242782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 
00:27:53.333 [2024-11-19 09:29:54.242909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.242941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.243150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.243183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.243384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.243417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.243547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.243580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.243817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.243850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.244024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.244060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.244274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.244307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.244481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.244513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.244659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.244691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.244880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.244913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 
00:27:53.333 [2024-11-19 09:29:54.245063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.245095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.245272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.245303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.245486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.245518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.245633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.245664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.245889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.245920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.246073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.246105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.246344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.246376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.246577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.246610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.246794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.246826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.247012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.247045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 
00:27:53.333 [2024-11-19 09:29:54.247235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.247266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.247375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.247411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.247671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.247704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.247936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.247997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.248114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.248146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.248362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.248394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.248512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.248545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.248748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.248780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.248964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-11-19 09:29:54.248996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.333 qpair failed and we were unable to recover it. 00:27:53.333 [2024-11-19 09:29:54.249188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.249219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 
00:27:53.334 [2024-11-19 09:29:54.249483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.249515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.249727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.249758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.249969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.250001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.250125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.250157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.250424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.250457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.250729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.250762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.250881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.250912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.251177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.251211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.251449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.251480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.251602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.251634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 
00:27:53.334 [2024-11-19 09:29:54.251760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.251791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.251969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.252004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.252252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.252284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.252452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.252484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.252599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.252630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.252741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.252772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.253008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.253043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.253147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.253179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.253333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.253365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.253623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.253654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 
00:27:53.334 [2024-11-19 09:29:54.253917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.253967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.254146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.254177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.254363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.254394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.254523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.254554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.254684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.254716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.254890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.254920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.255051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.255082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.255254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.255285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.255546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.255578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.255751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.255783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 
00:27:53.334 [2024-11-19 09:29:54.256016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.256050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.256259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.256296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.256577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.256609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.256825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.256856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.257100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.257134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.257397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.257430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.334 [2024-11-19 09:29:54.257670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.334 [2024-11-19 09:29:54.257702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.334 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.257941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.257983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.258176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.258209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.258445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.258476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 
00:27:53.335 [2024-11-19 09:29:54.258647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.258679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.258935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.258975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.259145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.259177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.259443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.259474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.259752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.259784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.260011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.260046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.260235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.260267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.260502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.260534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.260810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.260842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.261106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.261139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 
00:27:53.335 [2024-11-19 09:29:54.261342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.261373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.261570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.261603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.261711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.261743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.261999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.262034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.262207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.262239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.262407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.262439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.262647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.262677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.262884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.262914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.263127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.263161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.263359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.263392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 
00:27:53.335 [2024-11-19 09:29:54.263635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.263667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.263788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.263820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.263937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.263979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.264163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.264195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.264456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.264488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.264661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.264693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.264829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.264860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.264971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.265003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.265247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.265279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 00:27:53.335 [2024-11-19 09:29:54.265516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.335 [2024-11-19 09:29:54.265547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.335 qpair failed and we were unable to recover it. 
00:27:53.335 [2024-11-19 09:29:54.265806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.265838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.266072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.266111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.266303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.266336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.266509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.266541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.266813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.266844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.267044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.267077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.267325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.267358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.267573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.267604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.267867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.267898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.268149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.268183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 
00:27:53.336 [2024-11-19 09:29:54.268358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.268390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.268692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.268723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.268986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.269020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.269143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.269174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.269457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.269489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.269622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.269655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.269772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.269803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.269991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.270026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.270205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.270237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.270353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.270383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 
00:27:53.336 [2024-11-19 09:29:54.270562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.270593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.270766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.270798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.270928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.270971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.271243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.271276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.271520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.271552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.271656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.271688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.271867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.271898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.272081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.272115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.272318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.272352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.272615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.272646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 
00:27:53.336 [2024-11-19 09:29:54.272786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.272819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.273063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.273096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.273292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.273323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.273616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.273649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.273903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.273939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.274222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.274256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.274528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.274560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.336 [2024-11-19 09:29:54.274816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.336 [2024-11-19 09:29:54.274850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.336 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.275071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.275106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.275364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.275396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 
00:27:53.337 [2024-11-19 09:29:54.275565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.275598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.275779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.275817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.276017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.276051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.276312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.276344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.276452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.276484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.276686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.276719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.276936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.276990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.277110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.277143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.277407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.277439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.277576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.277608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 
00:27:53.337 [2024-11-19 09:29:54.277866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.277899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.278025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.278059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.278241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.278273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.278532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.278565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.278692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.278724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.278972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.279006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.279131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.279163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.279400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.279433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.279650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.279683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.279936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.279982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 
00:27:53.337 [2024-11-19 09:29:54.280198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.280231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.280432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.280465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.280638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.280671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.280906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.280939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.281139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.281172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.281412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.281445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.281626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.281658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.281835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.281869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.282040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.282073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.282336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.282368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 
00:27:53.337 [2024-11-19 09:29:54.282544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.282576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.282710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.282741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.282928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.282970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.283098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.283129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.283333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.337 [2024-11-19 09:29:54.283365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.337 qpair failed and we were unable to recover it. 00:27:53.337 [2024-11-19 09:29:54.283606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.283638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.283883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.283916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.284224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.284258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.284469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.284501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.284681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.284715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 
00:27:53.338 [2024-11-19 09:29:54.284904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.284936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.285132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.285177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.285359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.285390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.285524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.285557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.285744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.285777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.286040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.286073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.286200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.286233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.286471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.286504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.286699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.286731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.287009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.287043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 
00:27:53.338 [2024-11-19 09:29:54.287175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.287208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.287449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.287481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.287745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.287778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.287902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.287932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.288184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.288216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.288458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.288490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.288673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.288704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.288992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.289027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.289212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.289243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.289416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.289448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 
00:27:53.338 [2024-11-19 09:29:54.289723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.289755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.289879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.289913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.290107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.290141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.290404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.290436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.290620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.290653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.290825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.290856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.291096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.291130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.291368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.291400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.291672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.291704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.291881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.291914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 
00:27:53.338 [2024-11-19 09:29:54.292137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.292170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.292476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.292507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.338 qpair failed and we were unable to recover it. 00:27:53.338 [2024-11-19 09:29:54.292643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.338 [2024-11-19 09:29:54.292676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.292936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.292979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.293115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.293147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.293360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.293390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.293573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.293604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.293893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.293926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.294106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.294139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.294263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.294295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 
00:27:53.339 [2024-11-19 09:29:54.294469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.294501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.294682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.294720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.294894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.294926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.295115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.295148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.295323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.295353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.295546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.295577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.295791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.295822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.296005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.296040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.296227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.296259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.296501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.296533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 
00:27:53.339 [2024-11-19 09:29:54.296818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.296850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.297062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.297096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.297279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.297311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.297485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.297516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.297793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.297825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.297960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.297994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.298233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.298265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.298528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.298560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.298849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.298882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.299081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.299115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 
00:27:53.339 [2024-11-19 09:29:54.299315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.299346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.299584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.299617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.299856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.299887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.300020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.300053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.300262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.300293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.300534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.300565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.300745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.300777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.339 [2024-11-19 09:29:54.300985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.339 [2024-11-19 09:29:54.301019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.339 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.301211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.301242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.301417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.301449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 
00:27:53.340 [2024-11-19 09:29:54.301695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.301727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.301903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.301934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.302209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.302241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.302439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.302472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.302729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.302762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.303020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.303054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.303254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.303286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.303461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.303493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.303770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.303803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.304070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.304104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 
00:27:53.340 [2024-11-19 09:29:54.304403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.304435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.304632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.304669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.304842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.304875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.305053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.305088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.305217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.305249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.305442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.305473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.305657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.305690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.305931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.305974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.306086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.306118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.306390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.306422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 
00:27:53.340 [2024-11-19 09:29:54.306546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.306579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.306705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.306737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.307014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.307048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.307316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.307347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.307541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.307573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.307842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.307875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.308063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.308097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.340 [2024-11-19 09:29:54.308229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.340 [2024-11-19 09:29:54.308262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.340 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.308368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.308400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.308507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.308538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 
00:27:53.341 [2024-11-19 09:29:54.308779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.308811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.309061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.309093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.309267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.309299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.309482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.309514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.309753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.309785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.309975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.310009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.310255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.310286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.310392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.310422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.310549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.310580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 00:27:53.341 [2024-11-19 09:29:54.310819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.341 [2024-11-19 09:29:54.310852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.341 qpair failed and we were unable to recover it. 
00:27:53.637 [2024-11-19 09:29:54.354995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.355030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.355296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.355329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.355510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.355542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.355727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.355761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.355976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.356009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.356195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.356228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.356490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.356522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.356733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.356766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.356945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.356997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.357198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.357231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 
00:27:53.637 [2024-11-19 09:29:54.357360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.357393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.357636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.357669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.357993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.358027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.358151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.358184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.358481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.358515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.358806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.358840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.359108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.359142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.359416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.359449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.359731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.359765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.360022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.360056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 
00:27:53.637 [2024-11-19 09:29:54.360340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.360373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.360565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.360597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.360849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.360883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.361147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.361181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.361365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.361397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.361603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.361636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.361883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.361917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.362118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.362151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.362392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.362425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.362617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.362649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 
00:27:53.637 [2024-11-19 09:29:54.362836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.362868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.637 qpair failed and we were unable to recover it. 00:27:53.637 [2024-11-19 09:29:54.363153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.637 [2024-11-19 09:29:54.363187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.363460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.363494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.363684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.363717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.363905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.363938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.364131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.364166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.364374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.364407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.364536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.364568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.364806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.364838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.365083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.365115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 
00:27:53.638 [2024-11-19 09:29:54.365335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.365367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.365626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.365661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.365908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.365942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.366151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.366184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.366399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.366434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.366611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.366643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.366823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.366855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.367154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.367191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.367392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.367429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.367701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.367734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 
00:27:53.638 [2024-11-19 09:29:54.367866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.367899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.368101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.368134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.368317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.368350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.368625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.368657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.368837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.368868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.369113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.369147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.369416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.369448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.369734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.369766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.369896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.369927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.370193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.370227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 
00:27:53.638 [2024-11-19 09:29:54.370473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.370506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.370703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.370736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.370915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.370961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.371226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.371259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.371395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.371426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.371697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.371729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.371929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.371974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.372219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.372253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.372361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.372391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.372574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.372605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 
00:27:53.638 [2024-11-19 09:29:54.372737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.372770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.372945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.372992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.373280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.373312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.373488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.373520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.373768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.373802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.373990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.374028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.374319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.374353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.374474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.374505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.374703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.374736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.374920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.374964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 
00:27:53.638 [2024-11-19 09:29:54.375090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.375123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.375296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.375328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.375573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.375605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.375850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.375883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.376072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.376107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.638 qpair failed and we were unable to recover it. 00:27:53.638 [2024-11-19 09:29:54.376350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.638 [2024-11-19 09:29:54.376382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.376566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.376599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.376838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.376873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.377063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.377104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.377402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.377436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 
00:27:53.639 [2024-11-19 09:29:54.377581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.377612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.377811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.377842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.378110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.378145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.378342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.378375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.378553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.378584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.378724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.378756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.378874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.378905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.379112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.379147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.379336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.379367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.379614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.379648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 
00:27:53.639 [2024-11-19 09:29:54.379844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.379877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.380080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.380115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.380249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.380284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.380485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.380519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.380710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.380743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.380923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.380965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.381165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.381196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.381312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.381345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.381530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.381563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.381779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.381813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 
00:27:53.639 [2024-11-19 09:29:54.382080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.382117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.382292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.382325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.382498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.382532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.382784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.382816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.382956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.382989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.383247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.383328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.383566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.383602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.383900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.383933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.384137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.384173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.384301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.384333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 
00:27:53.639 [2024-11-19 09:29:54.384542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.384575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.384701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.384734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.384979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.385013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.385209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.385241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.385415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.385449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.385589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.385622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.385811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.385844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.386062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.386097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.386218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.386251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.386384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.386419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 
00:27:53.639 [2024-11-19 09:29:54.386550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.386583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.386762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.386795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.387041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.387076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.387267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.387300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.387460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.387492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.387616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.387650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.387832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.387865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.388038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.388072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.388314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.388348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 00:27:53.639 [2024-11-19 09:29:54.388485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.639 [2024-11-19 09:29:54.388518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.639 qpair failed and we were unable to recover it. 
00:27:53.639 [2024-11-19 09:29:54.388715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.639 [2024-11-19 09:29:54.388750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.639 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back from 09:29:54.388 through 09:29:54.444: every connect() attempt for tqpair=0x22f6ba0 to addr=10.0.0.2, port=4420 fails with errno = 111, and each time the qpair fails and is not recovered ...]
00:27:53.643 [2024-11-19 09:29:54.444968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.445004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.445300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.445334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.445602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.445637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.445916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.445973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.446240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.446275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.446568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.446602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.446812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.446847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.447058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.447095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.447400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.447435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.447564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.447599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 
00:27:53.643 [2024-11-19 09:29:54.447778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.447812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.448025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.448060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.448262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.448296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.448476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.448510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.448811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.448845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.449056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.449091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.449285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.449320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.449619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.449653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.449916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.449959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.450204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.450237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 
00:27:53.643 [2024-11-19 09:29:54.450379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.450414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.450695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.450729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.450922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.450968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.451253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.451288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.451571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.451605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.451785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.451819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.452097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.452133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.452447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.452481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.452761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.452794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.453070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.453105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 
00:27:53.643 [2024-11-19 09:29:54.453393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.453427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.453703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.453736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.454024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.454061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.454282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.454315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.454515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.454549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.454802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.454837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.455024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.455066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.455271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.455306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.455583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.455617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.455896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.455928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 
00:27:53.643 [2024-11-19 09:29:54.456159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.456193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.456463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.456498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.456619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.456652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.456905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.456939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.457235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.457270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.457486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.457519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.457697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.457731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.457864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.643 [2024-11-19 09:29:54.457898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.643 qpair failed and we were unable to recover it. 00:27:53.643 [2024-11-19 09:29:54.458168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.458204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.458385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.458419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 
00:27:53.644 [2024-11-19 09:29:54.458728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.458762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.458965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.459001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.459285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.459320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.459609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.459642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.459864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.459898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.460104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.460139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.460391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.460425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.460697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.460730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.461006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.461043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.461252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.461287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 
00:27:53.644 [2024-11-19 09:29:54.461536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.461570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.461764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.461798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.461987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.462022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.462276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.462316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.462596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.462630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.462907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.462940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.463230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.463264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.463469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.463504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.463619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.463653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.463850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.463884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 
00:27:53.644 [2024-11-19 09:29:54.464159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.464195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.464464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.464498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.464694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.464727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.464988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.465024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.465326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.465359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.465618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.465652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.465929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.465973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.466206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.466240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.466509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.466543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.466803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.466837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 
00:27:53.644 [2024-11-19 09:29:54.467139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.467174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.467445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.467479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.467752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.467785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.467999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.468034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.468286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.468319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.468528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.468563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.468828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.468862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.469156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.469192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.469479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.469513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.469705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.469738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 
00:27:53.644 [2024-11-19 09:29:54.470015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.470050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.470249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.470283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.470578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.470611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.470799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.470833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.471035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.471070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.471345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.471378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.471657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.471691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.471983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.472018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.472245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.472280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.472552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.472587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 
00:27:53.644 [2024-11-19 09:29:54.472802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.472836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.473120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.473155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.473434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.473467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.473688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.473722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.474039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.474075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.474373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.474408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.474619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.474652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.474904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.474938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.475060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.475095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.475365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.475399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 
00:27:53.644 [2024-11-19 09:29:54.475663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.475697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.475999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.476034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.476235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.476268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.476542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.476578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.476852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.476885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.477073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.477108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.477290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.477324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.644 [2024-11-19 09:29:54.477478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.644 [2024-11-19 09:29:54.477511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.644 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.477737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.477772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.478073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.478109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 
00:27:53.645 [2024-11-19 09:29:54.478368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.478401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.478701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.478735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.478927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.478979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.479183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.479217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.479518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.479556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.479761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.479796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.480071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.480108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.480342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.480376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.480626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.480659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.480841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.480877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 
00:27:53.645 [2024-11-19 09:29:54.481128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.481164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.481416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.481456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.481711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.481747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.482045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.482081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.482345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.482381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.482566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.482599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.482878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.482911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.483107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.483144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.483261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.483296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 00:27:53.645 [2024-11-19 09:29:54.483582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.645 [2024-11-19 09:29:54.483617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.645 qpair failed and we were unable to recover it. 
00:27:53.648 [2024-11-19 09:29:54.538759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.538793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.539072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.539109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.539394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.539430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.539561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.539594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.539896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.539930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.540216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.540250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.540471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.540505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.540758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.540794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.541076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.541118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.541343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.541377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 
00:27:53.648 [2024-11-19 09:29:54.541656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.541689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.541895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.541928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.542219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.542255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.542529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.542563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.542768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.542802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.542989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.543025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.543212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.543246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.543398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.543432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.543682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.543716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.543845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.543879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 
00:27:53.648 [2024-11-19 09:29:54.544089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.544126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.544378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.544411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.544676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.544711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.545028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.545064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.545341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.545376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.545654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.545688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.545975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.546012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.546162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.546196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.546379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.546413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.546687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.546722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 
00:27:53.648 [2024-11-19 09:29:54.546858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.546892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.547122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.547158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.547432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.547466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.547673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.547708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.547857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.547891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.548176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.548218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.548433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.548468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.548657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.548690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.548876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.548909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.549212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.549248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 
00:27:53.648 [2024-11-19 09:29:54.549443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.549478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.549664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.549698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.549961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.549998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.550209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.550244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.550494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.550528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.550712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.550747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.550936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.550986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.551181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.551213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.551502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.551534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.648 qpair failed and we were unable to recover it. 00:27:53.648 [2024-11-19 09:29:54.551741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.648 [2024-11-19 09:29:54.551776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 
00:27:53.649 [2024-11-19 09:29:54.552052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.552088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.552341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.552376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.552627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.552662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.552913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.552958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.553082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.553116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.553390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.553425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.553614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.553648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.553911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.553946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.554142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.554177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.554384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.554420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 
00:27:53.649 [2024-11-19 09:29:54.554694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.554727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.554979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.555016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.555151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.555190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.555471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.555504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.555749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.555785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.556050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.556086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.556307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.556340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.556458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.556494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.556713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.556747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.556940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.557004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 
00:27:53.649 [2024-11-19 09:29:54.557305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.557339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.557612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.557647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.557895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.557930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.558195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.558228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.558444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.558479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.558662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.558696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.558967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.559002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.559254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.559288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.559583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.559621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.559872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.559907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 
00:27:53.649 [2024-11-19 09:29:54.560177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.560214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.560498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.560535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.560810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.560844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.561033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.561068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.561193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.561228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.561420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.561455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.561706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.561742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.561998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.562033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.562335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.562369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.562568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.562602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 
00:27:53.649 [2024-11-19 09:29:54.562826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.562860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.563060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.563097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.563347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.563381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.563617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.563654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.563933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.563979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.564112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.564146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.564260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.564293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.564496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.564529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.564710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.564744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.564945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.565009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 
00:27:53.649 [2024-11-19 09:29:54.565262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.565296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.565580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.565613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.565829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.565865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.566067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.566110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.566291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.566325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.566527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.566560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.566828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.566864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.567145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.567180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.567386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.567422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.567614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.567648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 
00:27:53.649 [2024-11-19 09:29:54.567901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.567934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.568128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.568162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.568412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.568447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.568626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.568659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.568788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.568820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.569001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.569036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.569153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.569187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.569402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.569437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.569745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.569778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.569903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.569939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 
00:27:53.649 [2024-11-19 09:29:54.570169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.570205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.570508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.649 [2024-11-19 09:29:54.570542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.649 qpair failed and we were unable to recover it. 00:27:53.649 [2024-11-19 09:29:54.570752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.650 [2024-11-19 09:29:54.570786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.650 qpair failed and we were unable to recover it. 00:27:53.650 [2024-11-19 09:29:54.570972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.650 [2024-11-19 09:29:54.571007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.650 qpair failed and we were unable to recover it. 00:27:53.650 [2024-11-19 09:29:54.571283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.650 [2024-11-19 09:29:54.571318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.650 qpair failed and we were unable to recover it. 00:27:53.650 [2024-11-19 09:29:54.571601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.650 [2024-11-19 09:29:54.571635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.650 qpair failed and we were unable to recover it. 00:27:53.650 [2024-11-19 09:29:54.571913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.650 [2024-11-19 09:29:54.571945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.650 qpair failed and we were unable to recover it. 00:27:53.650 [2024-11-19 09:29:54.572239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.650 [2024-11-19 09:29:54.572273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.650 qpair failed and we were unable to recover it. 00:27:53.650 [2024-11-19 09:29:54.572459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.650 [2024-11-19 09:29:54.572494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.650 qpair failed and we were unable to recover it. 00:27:53.650 [2024-11-19 09:29:54.572718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.650 [2024-11-19 09:29:54.572751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.650 qpair failed and we were unable to recover it. 
00:27:53.650 [2024-11-19 09:29:54.572904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.572944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.573141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.573175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.573488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.573522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.573658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.573693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.573870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.573907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.574198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.574233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.574461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.574496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.574718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.574753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.574934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.574979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.575102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.575138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.575323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.575357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.575597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.575631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.575823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.575859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.576067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.576104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.576392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.576428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.576566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.576601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.576879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.576913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.577067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.577104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.577308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.577342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.577642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.577676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.577972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.578008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.578311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.578346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.578597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.578630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.578889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.578924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.579219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.579254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.579447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.579487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.579699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.579733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.580009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.580053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.580188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.580221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.580411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.580445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.580646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.580680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.580933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.580988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.581180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.581213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.581396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.581431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.581649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.581683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.581826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.581859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.582042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.582077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.582203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.582238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.582421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.582455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.582668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.582704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.582984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.583020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.583220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.583254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.583437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.583470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.583665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.583699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.583963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.583998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.584279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.584312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.584583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.584617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.584927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.584971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.585250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.585284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.585490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.585523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.585816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.585852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.586039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.586074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.586261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.586294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.586473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.586508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.586759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.586792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.586995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.587029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.587326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.587360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.587597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.587629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.587821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.587855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.588107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.650 [2024-11-19 09:29:54.588143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.650 qpair failed and we were unable to recover it.
00:27:53.650 [2024-11-19 09:29:54.588447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.588481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.588691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.588724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.588994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.589031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.589223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.589257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.589522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.589556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.589830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.589864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.590083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.590119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.590320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.590355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.590549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.590584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.590857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.590890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.591044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.591081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.591287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.591324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.591529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.591562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.591839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.591872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.592049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.592085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.592231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.592268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.592473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.592507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.592808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.592842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.593057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.593092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.593291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.593324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.593587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.593620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.593889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.593922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.594145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.594180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.594330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.594366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.594585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.594620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.594769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.594803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.595078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.595115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.595345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.595380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.595566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.595600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.595820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.595854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.596106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.596141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.596328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.596363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.596648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.596681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.596823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.596857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.597092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.597126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.597375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.597414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.597679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.597713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.597970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.598007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.598148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.598183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.598384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.598418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.598624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.598660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.598803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.598837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.599035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.599071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.599277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.599312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.599433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.599464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.599681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.599714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.599991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.600025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.600217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.600252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.600502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.600536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.600762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.600796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.601016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.601051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.601256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.601290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.601523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.601556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.601688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.601722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.601905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.601939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.602144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.602178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.602337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.602372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.602652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.602687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.602883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.602917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.603135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.603170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.603425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.603459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.603682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.603715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.603934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.603989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.604128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.604162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.604389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.604422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.604711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.604744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.604940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.604987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.605116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.605150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.605341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.605374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.605516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.605550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.605833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.605868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.606138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.651 [2024-11-19 09:29:54.606174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.651 qpair failed and we were unable to recover it.
00:27:53.651 [2024-11-19 09:29:54.606394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.606428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.606703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.606738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.606921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.606963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.607109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.607142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.607358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.607392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.607622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.607656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.607776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.607810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.608006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.608043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.608173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.608207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.608481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.608516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.608706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.608740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.608920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.608962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.609101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.609135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.609406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.609440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.609692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.609725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.609927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.609976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.610170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.610205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.610325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.610366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.610654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.610688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.610992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.611028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.611229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.611262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.611404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.611439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.611708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.611743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.612024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.612059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.612208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.612242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.612491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.612524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.612707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.612741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.612889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.612924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.613062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.613098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.613280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.613315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.613521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.613556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.613814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.613850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.614156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.614193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.614409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.614443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.614634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.614668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.614973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.615010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.615273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.615307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.615629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.615663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.615900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.615934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.616153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.616188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.616492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.616524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.616717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.616753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.616956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.616991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.617113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.617148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.617349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.617383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.617635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.617669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.617810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.617844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.618072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.618109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.618296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.618329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.618513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.618546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.618798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.618831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.619016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.619053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.619241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.619276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.619482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.619515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.619699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.619732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.620007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.620044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.620327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.620362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.620567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.620599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.620784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.620825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.621090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.621125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.621353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.621387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.621512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.621546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.621742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.621775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.622073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.622109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.622301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.622335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.622476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.622511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.622785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.622819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.623022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.623058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.623258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.623292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.623476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.623510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.623701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.623734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.623860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.623893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.624113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.652 [2024-11-19 09:29:54.624149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.652 qpair failed and we were unable to recover it.
00:27:53.652 [2024-11-19 09:29:54.624421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.653 [2024-11-19 09:29:54.624456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.653 qpair failed and we were unable to recover it.
00:27:53.653 [2024-11-19 09:29:54.624644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.653 [2024-11-19 09:29:54.624678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.653 qpair failed and we were unable to recover it.
00:27:53.653 [2024-11-19 09:29:54.624859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.653 [2024-11-19 09:29:54.624893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.653 qpair failed and we were unable to recover it.
00:27:53.653 [2024-11-19 09:29:54.625096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.653 [2024-11-19 09:29:54.625131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.653 qpair failed and we were unable to recover it.
00:27:53.653 [2024-11-19 09:29:54.625385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.653 [2024-11-19 09:29:54.625420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.653 qpair failed and we were unable to recover it.
00:27:53.653 [2024-11-19 09:29:54.625710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.625743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.626019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.626054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.626342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.626377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.626515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.626548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.626842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.626876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.627143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.627178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.627358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.627391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.627674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.627714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.627842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.627874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.628000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.628035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 
00:27:53.653 [2024-11-19 09:29:54.628229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.628263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.628473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.628506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.628778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.628814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.629007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.629044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.629251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.629285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.629565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.629600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.629792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.629827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.629988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.630022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.630224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.630257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.630455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.630488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 
00:27:53.653 [2024-11-19 09:29:54.630596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.630627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.630835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.630869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.631132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.631167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.631354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.631389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.631587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.631620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.631833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.631867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.632072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.632106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.632403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.632438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.632703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.632737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.633034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.633071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 
00:27:53.653 [2024-11-19 09:29:54.633216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.633250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.633502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.633535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.633716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.633750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.634029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.634064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.634277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.634321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.634447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.634481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.634755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.634790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.634994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.635030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.635175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.635208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.635514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.635550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 
00:27:53.653 [2024-11-19 09:29:54.635686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.635721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.635978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.636013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.636264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.636297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.636429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.636462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.636660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.636694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.636899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.636933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.637145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.637181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.637384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.637416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.637605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.637639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.637837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.637870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 
00:27:53.653 [2024-11-19 09:29:54.638070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.638105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.638232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.638265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.638470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.638503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.638688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.638721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.639006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.639042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.639195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.639230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.639484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.639519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.639823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.639858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.639988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.640021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.640298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.640332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 
00:27:53.653 [2024-11-19 09:29:54.640483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.640517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.640716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.640750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.653 [2024-11-19 09:29:54.641033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.653 [2024-11-19 09:29:54.641068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.653 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.641292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.641325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.641461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.641496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.641776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.641810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.642076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.642111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.642393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.642426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.642621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.642655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.642908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.642941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 
00:27:53.654 [2024-11-19 09:29:54.643163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.643198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.643393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.643428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.643678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.643712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.643903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.643936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.644088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.644124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.644401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.644434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.644620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.644654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.644863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.644896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.645168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.645203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.645424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.645461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 
00:27:53.654 [2024-11-19 09:29:54.645666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.645701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.645884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.645917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.646134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.646168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.646317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.646350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.646558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.646591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.646788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.646823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.647081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.647116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.647343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.647376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.647491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.647526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.647809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.647845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 
00:27:53.654 [2024-11-19 09:29:54.648035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.648070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.648310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.648344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.648544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.648578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.648706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.648740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.648942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.648989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.649104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.649139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.649282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.649318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.649571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.649605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.649791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.649824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.650018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.650053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 
00:27:53.654 [2024-11-19 09:29:54.650271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.650304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.650491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.650524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.650716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.650755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.650970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.651006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.651210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.651244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.651426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.651459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.651720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.651754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.652006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.652042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.652296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.652330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.652468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.652501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 
00:27:53.654 [2024-11-19 09:29:54.652681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.652713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.652998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.653035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.653168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.653201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.653458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.653492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.653714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.653748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.653967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.654003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.654296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.654329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.654557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.654591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.654869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.654903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.655185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.655221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 
00:27:53.654 [2024-11-19 09:29:54.655461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.655496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.655789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.655822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.656095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.656129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.656364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.656398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.656722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.656755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.656936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.656981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.657181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.657215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.657468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.657503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.657632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.657666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.657866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.657905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 
00:27:53.654 [2024-11-19 09:29:54.658174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.658210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.658344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.658377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.658651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.658684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.658865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.658898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.654 qpair failed and we were unable to recover it. 00:27:53.654 [2024-11-19 09:29:54.659174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.654 [2024-11-19 09:29:54.659209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.655 qpair failed and we were unable to recover it. 00:27:53.655 [2024-11-19 09:29:54.659477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.655 [2024-11-19 09:29:54.659511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.655 qpair failed and we were unable to recover it. 00:27:53.655 [2024-11-19 09:29:54.659709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.655 [2024-11-19 09:29:54.659744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.655 qpair failed and we were unable to recover it. 00:27:53.655 [2024-11-19 09:29:54.659917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.655 [2024-11-19 09:29:54.659974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.655 qpair failed and we were unable to recover it. 00:27:53.655 [2024-11-19 09:29:54.660228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.655 [2024-11-19 09:29:54.660263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.655 qpair failed and we were unable to recover it. 00:27:53.655 [2024-11-19 09:29:54.660507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.655 [2024-11-19 09:29:54.660539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.655 qpair failed and we were unable to recover it. 
[... 199 further identical failure triplets elided (2024-11-19 09:29:54.660794 through 09:29:54.713437): posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:27:53.936 [2024-11-19 09:29:54.713603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.937 [2024-11-19 09:29:54.713635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:53.937 qpair failed and we were unable to recover it.
00:27:53.937 [2024-11-19 09:29:54.713778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.713811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.714012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.714046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.714316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.714350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.714532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.714566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.714820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.714853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.715058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.715092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.715316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.715355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.715539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.715572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.715767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.715800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.716053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.716088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 
00:27:53.937 [2024-11-19 09:29:54.716304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.716337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.716605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.716638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.716883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.716915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.717122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.717157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.717303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.717336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.717604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.717637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.717888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.717919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.718236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.718271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.718423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.718457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.718589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.718621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 
00:27:53.937 [2024-11-19 09:29:54.718807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.718840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.719050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.719085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.719377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.719410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.719544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.719577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.719771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.719803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.720001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.720035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.720265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.720299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.720433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.720466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.720757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.720790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.720975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.721009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 
00:27:53.937 [2024-11-19 09:29:54.721200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.721233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.721373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.721405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.721616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.721649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.721850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.721882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.722116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.722151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.722280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.722312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.722538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.722570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.937 qpair failed and we were unable to recover it. 00:27:53.937 [2024-11-19 09:29:54.722796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.937 [2024-11-19 09:29:54.722829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.722978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.723014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.723211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.723243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 
00:27:53.938 [2024-11-19 09:29:54.723396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.723429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.723649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.723682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.723884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.723917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.724191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.724226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.724372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.724406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.724539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.724571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.724853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.724887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.725138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.725173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.725317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.725349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.725542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.725574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 
00:27:53.938 [2024-11-19 09:29:54.725850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.725883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.726108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.726143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.726335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.726368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.726497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.726529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.726738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.726772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.726970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.727003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.727136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.727169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.727306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.727338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.727498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.727532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.727781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.727813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 
00:27:53.938 [2024-11-19 09:29:54.728052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.728086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.728252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.728286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.728435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.728468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.728605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.728638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.728884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.728917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.729203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.729237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.729445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.729478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.729675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.729708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.729902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.729934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.730068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.730102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 
00:27:53.938 [2024-11-19 09:29:54.730361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.730395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.730544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.730576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.730828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.730860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.731072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.731107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.731245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.731284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.938 [2024-11-19 09:29:54.731479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.938 [2024-11-19 09:29:54.731511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.938 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.731729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.731762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.731941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.731987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.732222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.732255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.732462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.732495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 
00:27:53.939 [2024-11-19 09:29:54.732621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.732653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.732930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.732974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.733282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.733316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.733470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.733502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.733775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.733808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.734191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.734230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.734442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.734476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.734820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.734853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.735132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.735168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.735364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.735397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 
00:27:53.939 [2024-11-19 09:29:54.735535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.735568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.735762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.735795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.735998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.736032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.736158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.736190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.736332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.736365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.736552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.736585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.736861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.736894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.737059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.737093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.737295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.737328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.737592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.737624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 
00:27:53.939 [2024-11-19 09:29:54.737828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.737860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.738110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.738157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.738363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.738398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.738602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.738635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.738821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.738853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.739089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.739124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.739325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.739358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.739553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.739585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.739775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.739809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.740034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.740068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 
00:27:53.939 [2024-11-19 09:29:54.740272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.740306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.740526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.740559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.740688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.740720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.939 [2024-11-19 09:29:54.740924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.939 [2024-11-19 09:29:54.740969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.939 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.741182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.741216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.741375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.741408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.741678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.741711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.741992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.742028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.742312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.742346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.742491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.742523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.940 [2024-11-19 09:29:54.742731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.742764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.742994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.743029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.743256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.743290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.743427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.743459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.743762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.743795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.743994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.744029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.744243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.744277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.744543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.744575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.744773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.744811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.745055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.745088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.940 [2024-11-19 09:29:54.745247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.745281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.745436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.745467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.745707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.745739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.745942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.746005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.746209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.746243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.746446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.746479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.746760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.746792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.746993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.747027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.747232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.747265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 00:27:53.940 [2024-11-19 09:29:54.747390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.940 [2024-11-19 09:29:54.747422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.940 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-19 09:29:54.798478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.798512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.798714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.798747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.798972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.799006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.799146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.799178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.799369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.799401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.799669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.799710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.799896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.799930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.800086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.800120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.800375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.800408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.800675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.800708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-19 09:29:54.800972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.801006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.801209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.801242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.801460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.801493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.801757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.801788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.802085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.802121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.802339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.802372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.802509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.802541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.802797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.802831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.803061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.803097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.803256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.803290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-19 09:29:54.803492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.803525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.803765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.803798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.803966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.804002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.804254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.804287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.804493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.804524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.804742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.804775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.804992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.805028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.805228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.805261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.805487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.805520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.805747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.805781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 
00:27:53.946 [2024-11-19 09:29:54.805911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.946 [2024-11-19 09:29:54.805944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.946 qpair failed and we were unable to recover it. 00:27:53.946 [2024-11-19 09:29:54.806152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.806185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.806323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.806355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.806555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.806588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.806786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.806819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.807092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.807126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.807344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.807378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.807603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.807635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.807839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.807873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.808011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.808046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 
00:27:53.947 [2024-11-19 09:29:54.808249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.808281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.808430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.808462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.808745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.808779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.808985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.809019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.809271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.809303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.809521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.809554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.809769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.809804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.809944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.809991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.810196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.810230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.810425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.810457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 
00:27:53.947 [2024-11-19 09:29:54.810699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.810732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.810917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.810976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.811199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.811233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.811382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.811417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.811561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.811594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.811821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.811854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.812073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.812109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.812314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.812347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.812506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.812538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.812730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.812763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 
00:27:53.947 [2024-11-19 09:29:54.812974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.813008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.813286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.813319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.813506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.813541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.813694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.813726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.813978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.814012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.814218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.814254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.814397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.814430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.814568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.814601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.814875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.947 [2024-11-19 09:29:54.814909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.947 qpair failed and we were unable to recover it. 00:27:53.947 [2024-11-19 09:29:54.815122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.815158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 
00:27:53.948 [2024-11-19 09:29:54.815354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.815388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.815668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.815702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.815988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.816023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.816341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.816380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.816538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.816572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.816798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.816830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.817034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.817069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.817283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.817315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.817463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.817496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.817723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.817757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 
00:27:53.948 [2024-11-19 09:29:54.817965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.817999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.818145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.818179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.818324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.818357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.818636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.818669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.818882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.818915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.819145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.819179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.819303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.819336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.819472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.819505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.819748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.819781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.819972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.820006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 
00:27:53.948 [2024-11-19 09:29:54.820283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.820317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.820452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.820484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.820751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.820783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.821007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.821041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.821202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.821235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.821423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.821456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.821794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.821829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.822038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.822073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.822337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.822373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.822524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.822556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 
00:27:53.948 [2024-11-19 09:29:54.822831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.822871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.823070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.823105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.823366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.823401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.823739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.823772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.823981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.824014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.824237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.824270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.948 qpair failed and we were unable to recover it. 00:27:53.948 [2024-11-19 09:29:54.824531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.948 [2024-11-19 09:29:54.824564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.824767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.824801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.824997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.825032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.825230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.825263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 
00:27:53.949 [2024-11-19 09:29:54.825408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.825441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.825802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.825837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.826119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.826154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.826291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.826324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.826564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.826598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.826807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.826841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.827050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.827086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.827228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.827261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.827399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.827433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.827672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.827705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 
00:27:53.949 [2024-11-19 09:29:54.827901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.827935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.828247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.828282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.828496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.828531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.828802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.828834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.829052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.829086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.829238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.829271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.829487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.829522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.829802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.829837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.830092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.830129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 00:27:53.949 [2024-11-19 09:29:54.830265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.830300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it. 
00:27:53.949 [2024-11-19 09:29:54.830462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.949 [2024-11-19 09:29:54.830496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.949 qpair failed and we were unable to recover it.
00:27:53.955 [the same three-line sequence -- posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats continuously, differing only in its microsecond timestamps, from 09:29:54.830 through 09:29:54.879]
00:27:53.955 [2024-11-19 09:29:54.879188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.879220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.879471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.879505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.879729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.879767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.879965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.879999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.880203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.880236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.880382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.880414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.880550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.880583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.880856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.880889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.881047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.881081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.881284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.881317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 
00:27:53.955 [2024-11-19 09:29:54.881525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.881558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.881736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.881768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.881973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.882007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.882214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.882246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.882430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.882463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.882670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.882701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.882991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.883026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.883178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.883210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.883449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.883483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.883703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.883735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 
00:27:53.955 [2024-11-19 09:29:54.883881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.883913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.884213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.884246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.884479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.884511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.955 [2024-11-19 09:29:54.884708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.955 [2024-11-19 09:29:54.884741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.955 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.885001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.885035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.885231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.885264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.885454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.885488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.885757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.885788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.885991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.886024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.886230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.886262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-19 09:29:54.886405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.886439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.886588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.886620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.886810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.886843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.887066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.887101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.887322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.887355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.887499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.887530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.887804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.887838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.888108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.888142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.888378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.888411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.888721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.888754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-19 09:29:54.889014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.889048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.889250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.889281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.889548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.889581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.889799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.889831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.890111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.890146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.890413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.890452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.890739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.890775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.891044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.891078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.891224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.891256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.891468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.891501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-19 09:29:54.891704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.891737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.892011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.892045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.892324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.892355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.892552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.892584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.892831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.892863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.893089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.893122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.893273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.893305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.893517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.893549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.893768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.893801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.894002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.894035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 
00:27:53.956 [2024-11-19 09:29:54.894293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.894325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.956 [2024-11-19 09:29:54.894526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.956 [2024-11-19 09:29:54.894557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.956 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.894832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.894864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.895142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.895176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.895313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.895345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.895490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.895522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.895721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.895754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.895985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.896020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.896172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.896204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.896453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.896486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-19 09:29:54.896756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.896790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.897002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.897036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.897223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.897261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.897544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.897576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.897788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.897822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.898001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.898034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.898181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.898212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.898436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.898469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.898668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.898700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.898961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.898995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-19 09:29:54.899200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.899233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.899383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.899414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.899695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.899728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.900009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.900042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.900198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.900232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.900421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.900453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.900741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.900775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.901080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.901114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.901310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.901342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.901592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.901624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-19 09:29:54.901929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.901975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.902251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.902283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.902649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.902681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.902880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.902911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.903067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.903101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.903242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.903273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.903479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.903511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.903730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.903762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.903968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.904002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 00:27:53.957 [2024-11-19 09:29:54.904256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.957 [2024-11-19 09:29:54.904294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.957 qpair failed and we were unable to recover it. 
00:27:53.957 [2024-11-19 09:29:54.904582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.904614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.904908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.904941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.905156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.905189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.905485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.905517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.905765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.905797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.906017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.906051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.906283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.906317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.906445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.906477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.906734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.906766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.906969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.907003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 
00:27:53.958 [2024-11-19 09:29:54.907152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.907185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.907461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.907493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.907771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.907803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.907934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.907993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.908272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.908304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.908548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.908581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.908855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.908888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.909098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.909132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.909417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.909451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.909664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.909696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 
00:27:53.958 [2024-11-19 09:29:54.909892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.909924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.910185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.910219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.910442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.910476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.910731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.910764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.910972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.911006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.911206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.911238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.911449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.911481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.911797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.911829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.912055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.912089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.912365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.912397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 
00:27:53.958 [2024-11-19 09:29:54.912705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.912739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.912944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.912988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.913287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.913319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.913651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.913683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.913971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.914005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.914194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.914226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.914426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.914458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.958 [2024-11-19 09:29:54.914757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.958 [2024-11-19 09:29:54.914789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.958 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-19 09:29:54.915059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-19 09:29:54.915094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 00:27:53.959 [2024-11-19 09:29:54.915397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.959 [2024-11-19 09:29:54.915429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.959 qpair failed and we were unable to recover it. 
00:27:53.964 [2024-11-19 09:29:54.967308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.964 [2024-11-19 09:29:54.967340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.964 qpair failed and we were unable to recover it. 00:27:53.964 [2024-11-19 09:29:54.967473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.964 [2024-11-19 09:29:54.967506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.964 qpair failed and we were unable to recover it. 00:27:53.964 [2024-11-19 09:29:54.967698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.964 [2024-11-19 09:29:54.967731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.964 qpair failed and we were unable to recover it. 00:27:53.964 [2024-11-19 09:29:54.967981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.964 [2024-11-19 09:29:54.968015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:53.964 qpair failed and we were unable to recover it. 00:27:53.964 [2024-11-19 09:29:54.968311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.968345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.968549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.968581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.968835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.968868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.968998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.969032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.969161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.969193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.969356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.969388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 
00:27:54.241 [2024-11-19 09:29:54.969597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.969630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.969943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.969988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.970133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.970164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.970288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.970320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.970534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.970568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.970817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.970850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.971135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.971168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.971445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.971479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.971697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.971730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.971922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.971973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 
00:27:54.241 [2024-11-19 09:29:54.972127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.972160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.972378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.972412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.972565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.972600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.972790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.972822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.973022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.973056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.973203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.973236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.973472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.241 [2024-11-19 09:29:54.973505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.241 qpair failed and we were unable to recover it. 00:27:54.241 [2024-11-19 09:29:54.973692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.973723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.973958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.973993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.974198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.974230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 
00:27:54.242 [2024-11-19 09:29:54.974351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.974384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.974549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.974582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.974709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.974741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.974933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.974980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.975192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.975226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.975422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.975454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.975673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.975705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.975895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.975927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.976147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.976180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.976407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.976439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 
00:27:54.242 [2024-11-19 09:29:54.976668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.976700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.976989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.977024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.977177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.977210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.977351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.977383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.977560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.977598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.977790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.977823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.978055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.978091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.978297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.978330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.978568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.978600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.978813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.978844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 
00:27:54.242 [2024-11-19 09:29:54.979051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.979086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.979231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.979263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.979410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.979441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.979628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.979660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.979855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.979891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.980110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.980145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.980300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.980333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.980459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.980492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.980729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.980763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.980988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.981023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 
00:27:54.242 [2024-11-19 09:29:54.981206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.981238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.981370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.981403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.981624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.981659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.981883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.981915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.982140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.242 [2024-11-19 09:29:54.982175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.242 qpair failed and we were unable to recover it. 00:27:54.242 [2024-11-19 09:29:54.982391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.982424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.982541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.982574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.982764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.982797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.983059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.983094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.983242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.983275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 
00:27:54.243 [2024-11-19 09:29:54.983488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.983520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.983777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.983816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.984084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.984119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.984322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.984355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.984509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.984542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.984767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.984799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.985075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.985110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.985362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.985394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.985631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.985664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.985846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.985880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 
00:27:54.243 [2024-11-19 09:29:54.986154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.986189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.986339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.986371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.986577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.986610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.986812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.986844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.987122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.987156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.987353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.987386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.987590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.987623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.987746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.987776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.988087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.988122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.988243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.988275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 
00:27:54.243 [2024-11-19 09:29:54.988476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.988508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.988731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.988764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.989025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.989060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.989267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.989299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.989455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.989488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.989718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.989751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.990028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.990062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.990207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.990240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.990376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.990408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.990637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.990670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 
00:27:54.243 [2024-11-19 09:29:54.990854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.990887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.991140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.991175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.991364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.243 [2024-11-19 09:29:54.991397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.243 qpair failed and we were unable to recover it. 00:27:54.243 [2024-11-19 09:29:54.991616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.991649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.991845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.991878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.992134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.992167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.992379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.992412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.992616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.992648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.992847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.992878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.993074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.993108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 
00:27:54.244 [2024-11-19 09:29:54.993273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.993308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.993514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.993547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.993753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.993786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.993994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.994027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.994284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.994318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.994519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.994552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.994825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.994858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.995119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.995153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.995304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.995336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.995542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.995574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 
00:27:54.244 [2024-11-19 09:29:54.995904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.995937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.996094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.996128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.996410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.996444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.996698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.996731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.996988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.997022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.997218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.997249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.997512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.997546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.997816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.997848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.998056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.998090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 00:27:54.244 [2024-11-19 09:29:54.998289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.244 [2024-11-19 09:29:54.998322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.244 qpair failed and we were unable to recover it. 
00:27:54.244 [2024-11-19 09:29:54.998579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.244 [2024-11-19 09:29:54.998612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.244 qpair failed and we were unable to recover it.
[... ~195 further repetitions of the same three-record pattern elided: posix.c:1054:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.", timestamps 2024-11-19 09:29:54.998 through 09:29:55.050 ...]
00:27:54.250 [2024-11-19 09:29:55.050693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.250 [2024-11-19 09:29:55.050772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420
00:27:54.250 qpair failed and we were unable to recover it.
[... ~10 further repetitions of the same pattern elided, now against tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420, timestamps through 2024-11-19 09:29:55.053 ...]
00:27:54.250 [2024-11-19 09:29:55.053241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.053273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.053480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.053514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.053817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.053861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.054055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.054088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.054342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.054373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.054620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.054652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.054912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.054956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.055110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.055143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.055397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.055430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.055616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.055649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 
00:27:54.250 [2024-11-19 09:29:55.055846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.055879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.056130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.056165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.056423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.056456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.056654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.056685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.056971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.057007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.250 qpair failed and we were unable to recover it. 00:27:54.250 [2024-11-19 09:29:55.057128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.250 [2024-11-19 09:29:55.057162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.057320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.057353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.057539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.057570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.057853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.057885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.058143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.058177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-19 09:29:55.058378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.058410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.058703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.058736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.059045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.059078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.059230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.059261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.059403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.059436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.059714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.059746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.060006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.060039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.060225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.060256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.060452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.060483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.060705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.060746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-19 09:29:55.061019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.061055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.061242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.061275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.061472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.061507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.061764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.061799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.061959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.061994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.062151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.062186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.062391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.062423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.062706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.062740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.063001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.063036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.063267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.063301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-19 09:29:55.063496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.063530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.063714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.063747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.063964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.063999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.064191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.064225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.064409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.064442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.064603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.064635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.064751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.064789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.065064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.065100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.065238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.065271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.065485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.065519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 
00:27:54.251 [2024-11-19 09:29:55.065760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.065795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.066050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.066084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.066286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.251 [2024-11-19 09:29:55.066321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.251 qpair failed and we were unable to recover it. 00:27:54.251 [2024-11-19 09:29:55.066532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.066566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.066702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.066735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.066877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.066910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.067127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.067168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.067381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.067416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.067646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.067679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.067983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.068018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-19 09:29:55.068205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.068240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.068360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.068393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.068532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.068565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.068762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.068795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.069078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.069113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.069257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.069290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.069419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.069452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.069674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.069707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.069968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.070002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.070149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.070182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-19 09:29:55.070395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.070428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.070615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.070649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.070847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.070881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.071170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.071206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.071407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.071440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.071565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.071597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.071783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.071816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.072028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.072063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.072268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.072302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.072584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.072619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-19 09:29:55.072922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.072970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.073122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.073156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.073428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.073462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.073582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.073619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.073753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.073786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.073993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.074029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.074162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.074196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.074403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.074436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.074559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.074591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.074713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.074751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 
00:27:54.252 [2024-11-19 09:29:55.075007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.075041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.252 qpair failed and we were unable to recover it. 00:27:54.252 [2024-11-19 09:29:55.075167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.252 [2024-11-19 09:29:55.075200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.075357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.075389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.075622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.075653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.075851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.075883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.076052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.076089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.076220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.076254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.076464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.076498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.076695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.076728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.076856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.076889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-19 09:29:55.077104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.077138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.077259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.077291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.077400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.077433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.077629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.077663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.077797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.077831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.077941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.077992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.078181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.078214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.078397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.078432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.078616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.078650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.078778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.078812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-19 09:29:55.078968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.079003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.079153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.079186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.079311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.079344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.079611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.079644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.079912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.079982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.080123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.080155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.080373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.080406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.080614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.080646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.080772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.080805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.080968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.081004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 
00:27:54.253 [2024-11-19 09:29:55.081187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.081221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.081350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.081384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.081594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.081627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.081755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.081787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.082043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.082078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.082205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.082238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.082496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.082528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.082675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.082708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.253 [2024-11-19 09:29:55.082829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.253 [2024-11-19 09:29:55.082861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.253 qpair failed and we were unable to recover it. 00:27:54.254 [2024-11-19 09:29:55.082986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.254 [2024-11-19 09:29:55.083019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.254 qpair failed and we were unable to recover it. 
00:27:54.254 [2024-11-19 09:29:55.083141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.254 [2024-11-19 09:29:55.083174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.254 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats back-to-back from 09:29:55.083141 through 09:29:55.133631 (roughly 200 repetitions), differing only in the microsecond timestamps; every entry reports tqpair=0x22f6ba0, addr=10.0.0.2, port=4420, errno = 111 ...]
00:27:54.259 [2024-11-19 09:29:55.133831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-19 09:29:55.133862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-19 09:29:55.134097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-19 09:29:55.134131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-19 09:29:55.134334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-19 09:29:55.134366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-19 09:29:55.134553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-19 09:29:55.134587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-19 09:29:55.134778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-19 09:29:55.134810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-19 09:29:55.134999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-19 09:29:55.135031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-19 09:29:55.135254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-19 09:29:55.135286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.259 [2024-11-19 09:29:55.135591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.259 [2024-11-19 09:29:55.135623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.259 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.135912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.135945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.136250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.136282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-19 09:29:55.136544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.136576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.136769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.136801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.137069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.137102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.137377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.137409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.137702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.137734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.138005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.138041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.138251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.138283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.138485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.138516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.138722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.138753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.138959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.138993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-19 09:29:55.139186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.139217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.139491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.139523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.139716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.139748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.140012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.140047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.140348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.140380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.140558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.140590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.140783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.140814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.141091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.141124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.141408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.141446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.141729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.141760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-19 09:29:55.141939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.141981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.142260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.142293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.142488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.142519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.142778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.142810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.142989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.143021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.143299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.143332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.143593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.143625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.143872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.143904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.144224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.144258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.144537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.144569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 
00:27:54.260 [2024-11-19 09:29:55.144841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.144875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.145124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.145157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.260 [2024-11-19 09:29:55.145356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.260 [2024-11-19 09:29:55.145388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.260 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.145641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.145674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.145973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.146006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.146290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.146322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.146602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.146634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.146924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.146965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.147233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.147264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.147479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.147511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 
00:27:54.261 [2024-11-19 09:29:55.147772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.147804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.148114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.148147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.148407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.148438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.148638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.148670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.148973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.149008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.149296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.149339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.149570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.149602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.149877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.149910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.150203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.150237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.150502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.150536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 
00:27:54.261 [2024-11-19 09:29:55.150748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.150781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.151025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.151058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.151183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.151214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.151487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.151519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.151707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.151738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.152012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.152045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.152239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.152270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.152500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.152532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.152779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.152810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.153133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.153167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 
00:27:54.261 [2024-11-19 09:29:55.153465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.153498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.153723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.153755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.153963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.153996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.154180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.154212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.154392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.154425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.154650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.154683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.154935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.154979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.155105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.155138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.155410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.155442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 00:27:54.261 [2024-11-19 09:29:55.155713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.261 [2024-11-19 09:29:55.155747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.261 qpair failed and we were unable to recover it. 
00:27:54.261 [2024-11-19 09:29:55.155992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.156028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.156253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.156285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.156560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.156598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.156855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.156889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.157175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.157209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.157493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.157526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.157811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.157844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.158122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.158156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.158441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.158474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.158749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.158782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 
00:27:54.262 [2024-11-19 09:29:55.159069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.159103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.159326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.159359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.159496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.159528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.159708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.159740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.159934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.159979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.160237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.160270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.160536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.160614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.160916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.160974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.161260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.161294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.161536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.161569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 
00:27:54.262 [2024-11-19 09:29:55.161779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.161811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.162019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.162052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.162327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.162358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.162648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.162682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.162887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.162919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.163184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.163217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.163466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.163499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.163723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.163757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.164016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.164049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.164245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.164288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 
00:27:54.262 [2024-11-19 09:29:55.164569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.164602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.164874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.164905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.165138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.165171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.165424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.165457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.165715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.165746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.166053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.166087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.166344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.166375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.166644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.262 [2024-11-19 09:29:55.166675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.262 qpair failed and we were unable to recover it. 00:27:54.262 [2024-11-19 09:29:55.166926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.166967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.167115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.167148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 
00:27:54.263 [2024-11-19 09:29:55.167448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.167480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.167675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.167707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.167889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.167919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.168143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.168177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.168455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.168487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.168764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.168796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.169010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.169044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.169348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.169381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.169573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.169603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 00:27:54.263 [2024-11-19 09:29:55.169793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.263 [2024-11-19 09:29:55.169823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.263 qpair failed and we were unable to recover it. 
00:27:54.263 [2024-11-19 09:29:55.170098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.263 [2024-11-19 09:29:55.170131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.263 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with advancing timestamps, roughly two hundred further attempts between 09:29:55.170 and 09:29:55.226 ...]
00:27:54.269 [2024-11-19 09:29:55.226380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.269 [2024-11-19 09:29:55.226413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.269 qpair failed and we were unable to recover it.
00:27:54.269 [2024-11-19 09:29:55.226544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.226576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.226693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.226723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.226977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.227011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.227296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.227330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.227632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.227663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.227925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.227967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.228256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.228288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.228487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.228519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.228698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.228729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.228956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.228989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 
00:27:54.269 [2024-11-19 09:29:55.229276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.229308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.229497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.229529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.229733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.229764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.230043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.230075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.230290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.230322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.230530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.230563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.230779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.230810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.231069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.231103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.231306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.231337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.231652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.231685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 
00:27:54.269 [2024-11-19 09:29:55.231938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.231983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.232280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.232311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.232519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.232551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.232839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.232872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.233152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.233185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.233396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.233428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.233608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.233640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.233917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.233957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.234139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.234170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 00:27:54.269 [2024-11-19 09:29:55.234356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.269 [2024-11-19 09:29:55.234387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.269 qpair failed and we were unable to recover it. 
00:27:54.269 [2024-11-19 09:29:55.234597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.234628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.234849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.234882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.235134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.235167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.235426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.235457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.235707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.235739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.235887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.235919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.236147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.236185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.236375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.236407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.236610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.236641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.236922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.236962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-19 09:29:55.237237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.237269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.237379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.237411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.237635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.237667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.237861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.237893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.238036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.238068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.238383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.238416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.238595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.238626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.238755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.238787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.239076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.239109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.239406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.239438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-19 09:29:55.239696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.239726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.240015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.240048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.240329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.240360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.240641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.240672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.240972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.241005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.241269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.241302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.241592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.241623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.241912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.241944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.242176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.242207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.242460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.242491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 
00:27:54.270 [2024-11-19 09:29:55.242683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.242714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.242912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.242944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.243237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.243268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.243536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.243570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.243833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.243864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.244126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.244161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.244444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.270 [2024-11-19 09:29:55.244477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.270 qpair failed and we were unable to recover it. 00:27:54.270 [2024-11-19 09:29:55.244752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.244784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.245092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.245126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.245334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.245366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
00:27:54.271 [2024-11-19 09:29:55.245642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.245675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.245960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.245995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.246195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.246226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.246499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.246532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.246666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.246697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.246905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.246936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.247168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.247207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.247534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.247566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.247755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.247787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.247997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.248032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
00:27:54.271 [2024-11-19 09:29:55.248225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.248257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.248451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.248483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.248620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.248652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.248860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.248892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.249117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.249150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.249361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.249394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.249524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.249555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.249683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.249715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.249992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.250026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.250221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.250253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
00:27:54.271 [2024-11-19 09:29:55.250473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.250506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.250656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.250688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.250809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.250840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.251027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.251060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.251191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.251223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.251480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.251512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.251730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.251761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.251979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.252013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.252125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.252156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.252435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.252467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 
00:27:54.271 [2024-11-19 09:29:55.252754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.252787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.253001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.253035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.253236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.253268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.253559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.253592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.271 [2024-11-19 09:29:55.253848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.271 [2024-11-19 09:29:55.253880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.271 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.254079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.254113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.254225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.254257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.254478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.254511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.254715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.254748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.254940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.254985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 
00:27:54.272 [2024-11-19 09:29:55.255178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.255211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.255400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.255432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.255555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.255586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.255837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.255870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.256089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.256122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.256236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.256268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.256409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.256447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.256580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.256612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.256736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.256767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.256969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.257002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 
00:27:54.272 [2024-11-19 09:29:55.257193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.257225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.257416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.257448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.257702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.257733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.258013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.258047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.258156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.258187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.258383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.258415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.258694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.258726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.258879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.258912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.259125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.259159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 00:27:54.272 [2024-11-19 09:29:55.259378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.272 [2024-11-19 09:29:55.259410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.272 qpair failed and we were unable to recover it. 
00:27:54.272 [2024-11-19 09:29:55.259690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.272 [2024-11-19 09:29:55.259723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.272 qpair failed and we were unable to recover it.
[last three messages repeated continuously with varying timestamps, 09:29:55.259846 through 09:29:55.300144, every attempt against tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420]
00:27:54.557 [2024-11-19 09:29:55.300322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.557 [2024-11-19 09:29:55.300398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420
00:27:54.557 qpair failed and we were unable to recover it.
[last three messages repeated continuously with varying timestamps, 09:29:55.300550 through 09:29:55.307615, every attempt against tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420]
00:27:54.558 [2024-11-19 09:29:55.307742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.558 [2024-11-19 09:29:55.307774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.558 qpair failed and we were unable to recover it. 00:27:54.558 [2024-11-19 09:29:55.307893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.558 [2024-11-19 09:29:55.307925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.558 qpair failed and we were unable to recover it. 00:27:54.558 [2024-11-19 09:29:55.308132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.558 [2024-11-19 09:29:55.308164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.558 qpair failed and we were unable to recover it. 00:27:54.558 [2024-11-19 09:29:55.308376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.558 [2024-11-19 09:29:55.308413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.558 qpair failed and we were unable to recover it. 00:27:54.558 [2024-11-19 09:29:55.308535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.558 [2024-11-19 09:29:55.308567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.558 qpair failed and we were unable to recover it. 00:27:54.558 [2024-11-19 09:29:55.308753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.558 [2024-11-19 09:29:55.308785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.558 qpair failed and we were unable to recover it. 00:27:54.558 [2024-11-19 09:29:55.308997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.558 [2024-11-19 09:29:55.309031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.558 qpair failed and we were unable to recover it. 00:27:54.558 [2024-11-19 09:29:55.309157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.558 [2024-11-19 09:29:55.309190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.558 qpair failed and we were unable to recover it. 00:27:54.558 [2024-11-19 09:29:55.309322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.558 [2024-11-19 09:29:55.309354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.558 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.309528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.309560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 
00:27:54.559 [2024-11-19 09:29:55.309760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.309794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.309913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.309945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.310229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.310262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.310454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.310486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.310596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.310629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.310818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.310850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.310971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.311011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.311195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.311229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.311405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.311436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.311561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.311592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 
00:27:54.559 [2024-11-19 09:29:55.311769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.311800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.311966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.312000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.312255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.312286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.312456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.312487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.312683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.312714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.312831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.312861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.312993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.313026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.313291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.313324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.313498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.313528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.313657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.313689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 
00:27:54.559 [2024-11-19 09:29:55.313824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.313858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.313994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.314028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.314159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.314191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.314454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.314487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.314609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.314641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.314832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.314864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.315100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.315134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.315256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.315288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.315478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.559 [2024-11-19 09:29:55.315510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.559 qpair failed and we were unable to recover it. 00:27:54.559 [2024-11-19 09:29:55.315696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.315727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-19 09:29:55.315904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.315937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.316190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.316222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.316468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.316500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.316690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.316721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.316909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.316942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.317142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.317175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.317380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.317413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.317696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.317727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.317849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.317880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.318021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.318055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-19 09:29:55.318241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.318273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.318391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.318422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.318557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.318589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.318797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.318829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.319045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.319077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.319273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.319313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.319519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.319558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.319685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.319716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.319912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.319945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.320131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.320164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-19 09:29:55.320338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.320368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.320547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.320579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.320706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.320738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.320860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.320891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.321116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.321149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.321341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.321374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.321510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.321541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.321747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.321780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.322006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.322040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.322218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.322250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 
00:27:54.560 [2024-11-19 09:29:55.322358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.322390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.322508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.322539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.322746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.322778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.322966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.323000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.323133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.323164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.323356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.323387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.560 [2024-11-19 09:29:55.323491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.560 [2024-11-19 09:29:55.323523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.560 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.323798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.323828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.324019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.324053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.324241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.324275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 
00:27:54.561 [2024-11-19 09:29:55.324399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.324430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.324544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.324576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.324764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.324795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.325028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.325101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.325372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.325407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.325601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.325633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.325814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.325846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.325969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.326002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.326249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.326280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.326460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.326492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 
00:27:54.561 [2024-11-19 09:29:55.326691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.326722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.326832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.326864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.326982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.327015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.327199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.327231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.327346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.327377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.327502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.327532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.327726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.327766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.328055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.328087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.328211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.328242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.328439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.328470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 
00:27:54.561 [2024-11-19 09:29:55.328717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.328747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.328852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.328883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.329068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.329102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.329293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.329323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.329493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.329524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.329714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.329746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.329878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.329908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.330124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.330156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.330405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.330441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.330633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.330665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 
00:27:54.561 [2024-11-19 09:29:55.330858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.330890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.331038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.331072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.331275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.331309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.561 [2024-11-19 09:29:55.331563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.561 [2024-11-19 09:29:55.331594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.561 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.331710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.331743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.331945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.331988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.332161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.332194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.332408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.332442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.332564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.332594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.332773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.332804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 
00:27:54.562 [2024-11-19 09:29:55.332913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.332944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.333076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.333107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.333287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.333317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.333558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.333630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.333793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.333829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.333962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.333995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.334199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.334231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.334442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.334472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.334736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.334767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 00:27:54.562 [2024-11-19 09:29:55.334888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.562 [2024-11-19 09:29:55.334918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.562 qpair failed and we were unable to recover it. 
00:27:54.562 [2024-11-19 09:29:55.335204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.562 [2024-11-19 09:29:55.335237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.562 qpair failed and we were unable to recover it.
[last 3 lines repeated 209 more times, 2024-11-19 09:29:55.335474 through 09:29:55.376860 (console time 00:27:54.562-00:27:54.568); every retry fails with errno = 111 for tqpair=0x7faea4000b90, addr=10.0.0.2, port=4420]
00:27:54.568 [2024-11-19 09:29:55.377000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.377034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.377289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.377320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.377503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.377533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.377634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.377665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.377837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.377868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.378034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.378066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.378183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.378214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.378398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.378429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.378545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.378575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.378779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.378809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 
00:27:54.568 [2024-11-19 09:29:55.378937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.378980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.379092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.379123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.379310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.379340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.379523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.379553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.379808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.379838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.379967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.380000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.380240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.380270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.380465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.380495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.380783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.380814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.380993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.381026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 
00:27:54.568 [2024-11-19 09:29:55.381220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.381250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.381503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.381533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.381777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.381808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.381990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.382028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.382145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.382176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.382361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.382392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.382578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.382609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.382783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.382813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.383051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.568 [2024-11-19 09:29:55.383082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.568 qpair failed and we were unable to recover it. 00:27:54.568 [2024-11-19 09:29:55.383318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.383348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 
00:27:54.569 [2024-11-19 09:29:55.383475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.383505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.383626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.383657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.383845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.383876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.384070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.384102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.384287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.384317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.384516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.384547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.384736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.384766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.384966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.385000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.385186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.385217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.385421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.385451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 
00:27:54.569 [2024-11-19 09:29:55.385644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.385675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.385855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.385886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.386066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.386099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.386272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.386302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.386548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.386577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.386811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.386842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.387091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.387123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.387306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.387337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.387506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.387537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.387788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.387818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 
00:27:54.569 [2024-11-19 09:29:55.387998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.388030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.388154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.388184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.388365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.388395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.388572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.388602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.388785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.388815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.388938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.388985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.389164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.389193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.389427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.389458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.389714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.389745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.389862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.389892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 
00:27:54.569 [2024-11-19 09:29:55.390084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.390116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.569 [2024-11-19 09:29:55.390308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.569 [2024-11-19 09:29:55.390338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.569 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.390508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.390539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.390725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.390761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.391024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.391055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.391173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.391203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.391373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.391405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.391609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.391638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.391753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.391783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.391987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.392019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-19 09:29:55.392295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.392325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.392502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.392532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.392714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.392744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.392916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.392946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.393149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.393179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.393356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.393386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.393567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.393597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.393798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.393829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.394000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.394032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.394200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.394230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-19 09:29:55.394358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.394388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.394565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.394596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.394780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.394810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.394989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.395021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.395192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.395222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.395406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.395438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.395632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.395663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.395849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.395880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.396117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.396149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.396385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.396415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 
00:27:54.570 [2024-11-19 09:29:55.396536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.396566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.396750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.396780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.396964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.396996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.397256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.397286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.397459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.397489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.397674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.397705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.397827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.397857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.397966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.397997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.398194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.398225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.570 qpair failed and we were unable to recover it. 00:27:54.570 [2024-11-19 09:29:55.398393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.570 [2024-11-19 09:29:55.398424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 
00:27:54.571 [2024-11-19 09:29:55.398594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.398624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.398876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.398907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.399196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.399228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.399349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.399384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.399554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.399585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.399774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.399805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.400066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.400098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.400363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.400393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.400526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.400556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.400722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.400753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 
00:27:54.571 [2024-11-19 09:29:55.400993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.401026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.401302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.401332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.401597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.401627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.401879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.401909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.402102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.402135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.402333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.402364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.402645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.402675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.402796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.402827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.402958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.402990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.403156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.403187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 
00:27:54.571 [2024-11-19 09:29:55.403367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.403397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.403563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.403593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.403703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.403733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.403836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.403866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.404034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.404067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.404302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.404332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.404499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.404529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.404643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.404673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.404967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.404999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 00:27:54.571 [2024-11-19 09:29:55.405237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.571 [2024-11-19 09:29:55.405267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.571 qpair failed and we were unable to recover it. 
00:27:54.571 [2024-11-19 09:29:55.405382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.571 [2024-11-19 09:29:55.405413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.571 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 -> sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats roughly 200 more times in the ~46 ms between the first and last entries shown here; duplicate entries omitted ...]
00:27:54.577 [2024-11-19 09:29:55.451520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.577 [2024-11-19 09:29:55.451550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.577 qpair failed and we were unable to recover it.
00:27:54.577 [2024-11-19 09:29:55.451739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.451769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.451971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.452005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.452174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.452204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.452316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.452346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.452533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.452565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.452742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.452773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.452899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.452930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.453085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.453116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.453321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.453352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.453607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.453638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 
00:27:54.577 [2024-11-19 09:29:55.453838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.453869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.453992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.454025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.454237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.454267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.454505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.454535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.454744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.454775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.454961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.577 [2024-11-19 09:29:55.454994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.577 qpair failed and we were unable to recover it. 00:27:54.577 [2024-11-19 09:29:55.455237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.455267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.455403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.455435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.455694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.455724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.455855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.455886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 
00:27:54.578 [2024-11-19 09:29:55.456065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.456097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.456299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.456329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.456509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.456540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.456794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.456825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.457073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.457104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.457288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.457319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.457523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.457554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.457759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.457789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.457968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.458001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.458184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.458215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 
00:27:54.578 [2024-11-19 09:29:55.458390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.458426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.458709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.458739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.458907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.458938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.459156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.459186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.459430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.459461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.459572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.459601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.459834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.459865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.459982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.460014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.460248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.460278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.460459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.460491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 
00:27:54.578 [2024-11-19 09:29:55.460729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.460760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.460927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.460984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.461166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.461197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.461311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.461342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.461475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.461506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.461677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.461707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.461829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.461860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.462028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.462060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.462323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.578 [2024-11-19 09:29:55.462353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.578 qpair failed and we were unable to recover it. 00:27:54.578 [2024-11-19 09:29:55.462469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.462499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 
00:27:54.579 [2024-11-19 09:29:55.462614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.462645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.462746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.462776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.463037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.463069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.463238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.463269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.463402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.463433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.463611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.463641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.463824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.463854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.464129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.464163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.464277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.464307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.464442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.464473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 
00:27:54.579 [2024-11-19 09:29:55.464641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.464670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.464903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.464933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.465141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.465173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.465358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.465388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.465558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.465589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.465769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.465800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.466034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.466066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.466331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.466362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.466621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.466652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.466831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.466861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 
00:27:54.579 [2024-11-19 09:29:55.466964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.467002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.467190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.467221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.467384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.467415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.467604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.467635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.467893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.467923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.468107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.468138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.468399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.468429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.468623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.468654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.468878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.468908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.469127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.469160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 
00:27:54.579 [2024-11-19 09:29:55.469278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.469307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.469435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.469466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.469641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.469672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.469847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.469877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.470069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.470102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.470218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.470248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.579 [2024-11-19 09:29:55.470528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.579 [2024-11-19 09:29:55.470559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.579 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.470798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.470829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.471039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.471072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.471279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.471311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-19 09:29:55.471545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.471575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.471768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.471799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.472058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.472089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.472191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.472221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.472422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.472453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.472588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.472617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.472876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.472907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.473058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.473090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.473317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.473348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.473584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.473616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-19 09:29:55.473861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.473891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.474030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.474062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.474237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.474267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.474502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.474533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.474715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.474747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.474923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.474965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.475155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.475186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.475314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.475343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.475520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.475551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.475677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.475708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-19 09:29:55.475823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.475859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.476052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.476084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.476207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.476237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.476419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.476451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.476576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.476606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.476728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.476758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.476879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.476909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.477024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.477056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.477249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.477279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.477537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.477568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 
00:27:54.580 [2024-11-19 09:29:55.477698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.477728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.477967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.478000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.478187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.478218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.478400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.478431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.580 qpair failed and we were unable to recover it. 00:27:54.580 [2024-11-19 09:29:55.478615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.580 [2024-11-19 09:29:55.478646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-19 09:29:55.478775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-19 09:29:55.478805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-19 09:29:55.479012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-19 09:29:55.479044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-19 09:29:55.479219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-19 09:29:55.479250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-19 09:29:55.479432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-19 09:29:55.479461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 00:27:54.581 [2024-11-19 09:29:55.479739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.581 [2024-11-19 09:29:55.479770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.581 qpair failed and we were unable to recover it. 
00:27:54.581 [2024-11-19 09:29:55.480005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.581 [2024-11-19 09:29:55.480038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.581 qpair failed and we were unable to recover it.
00:27:54.581 [the connect()/qpair-recovery error triple above repeats roughly 170 times in total, with only the microsecond timestamp advancing (2024-11-19 09:29:55.480212 through 09:29:55.516609), always for tqpair=0x7faea4000b90 against 10.0.0.2 port 4420]
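errno = 111 on Linux is ECONNREFUSED: the initiator's TCP SYN reaches 10.0.0.2, but nothing is listening on port 4420 any more, so the peer answers with RST and connect() fails immediately. The same failure mode is reproducible outside SPDK; a minimal illustration using bash's /dev/tcp (address and port copied from the log, output typical of bash 4+):

    $ bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'    # no nvmf_tgt listener behind this port
    bash: connect: Connection refused
    bash: /dev/tcp/10.0.0.2/4420: Connection refused

This is exactly the error posix_sock_create keeps reporting above while the target application is down.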
00:27:54.585 [2024-11-19 09:29:55.516792 .. 09:29:55.518005] (seven more connect()/qpair-recovery error triples, same tqpair, same address)
00:27:54.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1272217 Killed "${NVMF_APP[@]}" "$@"
00:27:54.586 [2024-11-19 09:29:55.518266 .. 09:29:55.518529] (two more connect()/qpair-recovery error triples)
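The "Killed" message is bash's job-status report: process 1272217, the target application started from the harness's NVMF_APP command array (bash attributes the job to line 36 of target_disconnect.sh), was terminated with SIGKILL. That is the fault this test case injects, and it is why every connect() above and below returns ECONNREFUSED: the listener on 10.0.0.2:4420 vanished with the process. A hedged sketch of the shape of that step (the PID variable is illustrative; the script's actual bookkeeping may differ):

    # Fault injection: SIGKILL the running target so its TCP listener disappears.
    # bash then prints the "... 1272217 Killed ${NVMF_APP[@]} ..." status seen above,
    # and the initiator's reconnect loop fails with errno 111 until a target returns.
    kill -9 "$nvmfpid"    # assumption: harness-tracked PID of the nvmf_tgt process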
00:27:54.586 [2024-11-19 09:29:55.518641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-19 09:29:55.518672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:54.586 [2024-11-19 09:29:55.518811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-19 09:29:55.518843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-19 09:29:55.519096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-19 09:29:55.519129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:54.586 [2024-11-19 09:29:55.519302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-19 09:29:55.519333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-19 09:29:55.519465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:54.586 [2024-11-19 09:29:55.519496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 [2024-11-19 09:29:55.519684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-19 09:29:55.519715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:54.586 [2024-11-19 09:29:55.519903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-19 09:29:55.519933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 00:27:54.586 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:54.586 [2024-11-19 09:29:55.520118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.586 [2024-11-19 09:29:55.520151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.586 qpair failed and we were unable to recover it. 
00:27:54.587 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1272932
00:27:54.587 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1272932
00:27:54.587 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:54.587 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1272932 ']'
00:27:54.587 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:54.587 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:27:54.587 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:54.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:54.587 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:54.587 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
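At this point the harness has restarted the target: a fresh nvmf_tgt (pid 1272932) was launched in the cvl_0_0_ns_spdk namespace, and waitforlisten polls, up to max_retries=100 per the trace, for the app's RPC socket at /var/tmp/spdk.sock. The real helper is shell code in autotest_common.sh; the following is only a conceptual C rendering of the same poll-until-listening idea, assuming nothing beyond the socket path and retry budget shown above:

```c
/* Conceptual sketch of a waitforlisten-style poll (the real helper is shell
 * code in autotest_common.sh): retry connecting to the app's UNIX-domain RPC
 * socket until it accepts, or give up after max_retries attempts. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;       /* the app is up and listening */
        }
        close(fd);
        usleep(100 * 1000); /* brief pause between attempts */
    }
    return -1;              /* never came up within the retry budget */
}

int main(void)
{
    /* Socket path and retry count taken from the trace above. */
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
        return 1;
    }
    puts("process is up and listening");
    return 0;
}
```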
00:27:54.591 [2024-11-19 09:29:55.555346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.555377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.555499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.555530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.555634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.555665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.555838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.555869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.555987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.556019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.556223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.556255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.556521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.556552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.556736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.556768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.556883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.556915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.557147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.557180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 
00:27:54.591 [2024-11-19 09:29:55.557358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.557389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.557503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.557533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.557733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.557768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.557872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.557904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.558155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.558189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.558298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.558330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.558505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.558537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.558807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.558838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.558973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.559007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.559206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.559237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 
00:27:54.591 [2024-11-19 09:29:55.559429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.559462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.559724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.559756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.559873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.559904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.560035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.560068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.560194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.560226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.560365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.560403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.560508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.560539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.560658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.560689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.560862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.560894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.561085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.561116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 
00:27:54.591 [2024-11-19 09:29:55.561292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.591 [2024-11-19 09:29:55.561323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.591 qpair failed and we were unable to recover it. 00:27:54.591 [2024-11-19 09:29:55.561502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.561534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.561728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.561759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.561934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.561975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.562166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.562197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.562329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.562361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.562534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.562564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.562754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.562786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.562992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.563023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.563211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.563242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 
00:27:54.592 [2024-11-19 09:29:55.563429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.563460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.563562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.563592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.563760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.563791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.564030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.564062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.564298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.564329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.564451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.564482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.564665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.564696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.564882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.564913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.565191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.565223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.565408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.565440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 
00:27:54.592 [2024-11-19 09:29:55.565720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.565752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.565935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.565976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.566223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.566255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.566458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.566487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.566604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.566635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.566751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.566782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.566897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.566927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.567059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.567091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.567287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.567318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.567559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.567590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 
00:27:54.592 [2024-11-19 09:29:55.567770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.567802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.567917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.567960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.568093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.568125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.568384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.592 [2024-11-19 09:29:55.568415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.592 qpair failed and we were unable to recover it. 00:27:54.592 [2024-11-19 09:29:55.568599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.568631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.568753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.568790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.568971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.569004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.569184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.569215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.569384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.569415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.569521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.569552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 
00:27:54.593 [2024-11-19 09:29:55.569721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.569753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.569855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.569886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.570004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.570037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.570208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.570240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.570429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.570460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.570592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.570623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.570812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.570843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.570967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.571000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.571248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.571280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.571472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.571503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 
00:27:54.593 [2024-11-19 09:29:55.571687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.571718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.571993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.572026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.572156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.572187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.572427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.572458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.572750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.572784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.572974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.573007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.573120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.573151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.573339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.573370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.573490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.573521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.573768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.573798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 
00:27:54.593 [2024-11-19 09:29:55.574003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.574035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.574161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.574193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.574302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.574334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.574514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.574546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.574760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.574793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.574966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.574997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.575191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.575222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.575393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.575424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.575662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.575693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.575865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.575897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 
00:27:54.593 [2024-11-19 09:29:55.576146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.593 [2024-11-19 09:29:55.576180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.593 qpair failed and we were unable to recover it. 00:27:54.593 [2024-11-19 09:29:55.576315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.594 [2024-11-19 09:29:55.576345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.594 qpair failed and we were unable to recover it. 00:27:54.594 [2024-11-19 09:29:55.576523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.594 [2024-11-19 09:29:55.576554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.594 qpair failed and we were unable to recover it. 00:27:54.594 [2024-11-19 09:29:55.576724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.594 [2024-11-19 09:29:55.576766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.594 qpair failed and we were unable to recover it. 00:27:54.594 [2024-11-19 09:29:55.576940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.594 [2024-11-19 09:29:55.576985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.594 qpair failed and we were unable to recover it. 00:27:54.594 [2024-11-19 09:29:55.577110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.594 [2024-11-19 09:29:55.577141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.594 qpair failed and we were unable to recover it. 00:27:54.594 [2024-11-19 09:29:55.577335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.594 [2024-11-19 09:29:55.577367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.594 qpair failed and we were unable to recover it. 00:27:54.594 [2024-11-19 09:29:55.577489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.594 [2024-11-19 09:29:55.577520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.594 qpair failed and we were unable to recover it. 00:27:54.594 [2024-11-19 09:29:55.577729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.594 [2024-11-19 09:29:55.577761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.594 qpair failed and we were unable to recover it. 00:27:54.594 [2024-11-19 09:29:55.577859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.594 [2024-11-19 09:29:55.577901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.594 qpair failed and we were unable to recover it. 
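Note: errno = 111 is ECONNREFUSED on Linux, i.e. each connect() reaches 10.0.0.2 but nothing is accepting on port 4420 yet; the target's startup banner only appears further down in the log. A minimal, self-contained C sketch of the same syscall-level failure (illustrative only, not SPDK source; address and port copied from the log):

/* Sketch: a blocking TCP connect() that reports errno 111 (ECONNREFUSED)
 * when no listener is on the target port, mirroring what
 * posix_sock_create logs above. Not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111 */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a host with no listener on that port and it prints the same errno = 111 the log shows.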
00:27:54.594 [2024-11-19 09:29:55.578102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.594 [2024-11-19 09:29:55.578103] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:27:54.594 [2024-11-19 09:29:55.578138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.594 [2024-11-19 09:29:55.578156] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:54.594 qpair failed and we were unable to recover it.
00:27:54.594 [... the error pair repeats with successive timestamps from 09:29:55.578340 through 09:29:55.579829 ...]
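The banner interleaved above shows the nvmf target process beginning its DPDK EAL initialization while the host side is still retrying, which is consistent with the refused connections. In those EAL parameters, -c 0xF0 is the core mask, one bit per logical core. A small sketch decoding that mask (assuming standard DPDK coremask semantics, bit n selects lcore n):

/* Decode the DPDK core mask 0xF0 from the EAL parameters above:
 * bit n set => lcore n is used, so 0xF0 selects lcores 4, 5, 6, 7. */
#include <stdio.h>

int main(void)
{
    unsigned mask = 0xF0;
    for (int core = 0; core < 32; core++)
        if (mask & (1u << core))
            printf("lcore %d\n", core);  /* prints 4, 5, 6, 7 */
    return 0;
}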
00:27:54.594 [... the error pair keeps repeating with successive timestamps from 09:29:55.579939 through 09:29:55.592223; the wall-clock prefix advances from 00:27:54.594 to 00:27:54.876 partway through ...]
00:27:54.876 [2024-11-19 09:29:55.592423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.876 [2024-11-19 09:29:55.592469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.876 qpair failed and we were unable to recover it.
00:27:54.876 [2024-11-19 09:29:55.592652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.592683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.592910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.592963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.593160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.593196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.593318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.593350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.593519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.593551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.593672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.593712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.593898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.593929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.594182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.594214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.594411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.594443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.594688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.594720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 
00:27:54.876 [2024-11-19 09:29:55.594968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.595002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.595121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.595153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.595265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.595298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.595498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.595531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.595656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.595688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.595810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.595843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.596083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.596117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.596356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.596389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.596564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.596596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.596785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.596817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 
00:27:54.876 [2024-11-19 09:29:55.596940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.596983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.597167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.597199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.597297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.597329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.597433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.597464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.597635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.597666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.876 qpair failed and we were unable to recover it. 00:27:54.876 [2024-11-19 09:29:55.597780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.876 [2024-11-19 09:29:55.597811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.597957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.597990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.598099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.598130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.598249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.598280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.598480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.598513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 
00:27:54.877 [2024-11-19 09:29:55.598718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.598749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.598871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.598902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.599094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.599129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.599250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.599281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.599390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.599422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.599608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.599640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.599766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.599797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.599918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.599962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.600148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.600181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.600301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.600332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 
00:27:54.877 [2024-11-19 09:29:55.600435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.600467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.600642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.600673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.600965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.600999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.601191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.601223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.601330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.601361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.601467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.601504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.601700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.601732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.601865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.601897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.602026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.602061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.602246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.602278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 
00:27:54.877 [2024-11-19 09:29:55.602451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.602483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.602657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.602689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.602806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.602838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.602946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.602989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.603168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.603201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.603339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.603371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.603558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.603589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.603832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.603864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.604056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.604091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.604211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.604244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 
00:27:54.877 [2024-11-19 09:29:55.604520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.604551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.604663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.604696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.604913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.877 [2024-11-19 09:29:55.604945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.877 qpair failed and we were unable to recover it. 00:27:54.877 [2024-11-19 09:29:55.605063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.605096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.605219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.605251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.605468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.605499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.605765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.605797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.605966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.606000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.606129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.606161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.606275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.606307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 
00:27:54.878 [2024-11-19 09:29:55.606455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.606486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.606656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.606688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.606881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.606912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.607112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.607146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.607272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.607303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.607405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.607437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.607541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.607572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.607710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.607741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.607853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.607883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.608007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.608040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 
00:27:54.878 [2024-11-19 09:29:55.608234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.608266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.608373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.608404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.608659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.608690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.608870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.608902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.609101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.609134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.609249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.609287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.609401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.609441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.609625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.609657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.609899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.609931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.610148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.610179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 
00:27:54.878 [2024-11-19 09:29:55.610304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.610336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.610600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.610630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.610803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.610834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.610969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.611003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.611121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.611153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.611321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.611353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.611625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.611661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.611876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.611908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.612160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.612231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 00:27:54.878 [2024-11-19 09:29:55.612395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.612431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.878 qpair failed and we were unable to recover it. 
00:27:54.878 [2024-11-19 09:29:55.612558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.878 [2024-11-19 09:29:55.612589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.612777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.612814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.612964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.612996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.613176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.613208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.613438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.613469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.613588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.613618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.613862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.613899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.614121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.614156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.614343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.614373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.614546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.614577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 
00:27:54.879 [2024-11-19 09:29:55.614709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.614739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.614932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.614981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.615118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.615154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.615266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.615297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.615402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.615433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.615557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.615588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.615718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.615750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.615857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.615888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.616004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.616037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.616151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.616182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 
00:27:54.879 [2024-11-19 09:29:55.616302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.616334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.616438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.616468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.616665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.616697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.616894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.616935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.617150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.617174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.617286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.617312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.617556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.617576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.617715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.617735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.617909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.617929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 00:27:54.879 [2024-11-19 09:29:55.618101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.879 [2024-11-19 09:29:55.618122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.879 qpair failed and we were unable to recover it. 
00:27:54.879 [2024-11-19 09:29:55.618221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.879 [2024-11-19 09:29:55.618241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420
00:27:54.879 qpair failed and we were unable to recover it.
00:27:54.879 [... the same three-line sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously with fresh timestamps from 09:29:55.618 through 09:29:55.658 ...]
00:27:54.885 [2024-11-19 09:29:55.658804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.658836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.658945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.658988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.659172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.659205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.659399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.659429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.659620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.659652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.659761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.659792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.660000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.660035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.660213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.660244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.660421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.660453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.660577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.660608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 
00:27:54.885 [2024-11-19 09:29:55.660786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.660817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.661027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.661060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.885 [2024-11-19 09:29:55.661239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.885 [2024-11-19 09:29:55.661276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.885 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.661447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:54.886 [2024-11-19 09:29:55.661544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.661574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.661709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.661740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.661911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.661944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.662154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.662186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.662356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.662388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.662654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.662685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-19 09:29:55.662867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.662900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.663028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.663059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.663239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.663272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.663441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.663473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.663734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.663765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.663872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.663903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.664031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.664070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.664275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.664307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.664487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.664519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.664696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.664728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-19 09:29:55.664963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.664997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.665168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.665200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.665392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.665422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.665555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.665587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.665763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.665794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.665981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.666014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.666254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.666286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.666464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.666495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.666665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.666697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.666869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.666901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-19 09:29:55.667201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.667235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.667354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.667385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.667577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.667609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.667788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.667821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.668059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.668092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.668330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.668363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.668548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.668579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.668819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.668852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.669036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.669071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 00:27:54.886 [2024-11-19 09:29:55.669256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.669288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.886 qpair failed and we were unable to recover it. 
00:27:54.886 [2024-11-19 09:29:55.669551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.886 [2024-11-19 09:29:55.669584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.669710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.669742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.669924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.669969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.670102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.670136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.670326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.670358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.670568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.670600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.670792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.670824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.671100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.671134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.671318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.671350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.671521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.671554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 
00:27:54.887 [2024-11-19 09:29:55.671738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.671770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.671944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.671985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.672171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.672205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.672323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.672356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.672477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.672510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.672714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.672745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.672918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.672965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.673237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.673273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.673454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.673485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.673659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.673690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 
00:27:54.887 [2024-11-19 09:29:55.673791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.673822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.674047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.674080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.674263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.674294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.674480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.674513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.674638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.674670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.674858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.674889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.675005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.675039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.675292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.675336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.675531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.675564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.675833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.675865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 
00:27:54.887 [2024-11-19 09:29:55.676055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.676089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.676221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.676252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.676436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.676467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.676643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.676675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.676793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.676824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.676927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.676987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.677183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.677215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.887 qpair failed and we were unable to recover it. 00:27:54.887 [2024-11-19 09:29:55.677323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.887 [2024-11-19 09:29:55.677355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.677534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.677566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.677687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.677718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 
00:27:54.888 [2024-11-19 09:29:55.677983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.678016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.678205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.678237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.678418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.678450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.678629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.678674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.678808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.678841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.679013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.679047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.679167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.679199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.679367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.679400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.679604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.679635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.679813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.679845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 
00:27:54.888 [2024-11-19 09:29:55.680016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.680050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.680255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.680287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.680524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.680557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.680840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.680871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.680987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.681021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.681269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.681300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.681563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.681601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.681724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.681756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.681994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.682027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.682243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.682274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 
00:27:54.888 [2024-11-19 09:29:55.682540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.682572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.682758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.682789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.682981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.683014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.683201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.683232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.683409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.683441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.683652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.683684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.683876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.683908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.684022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.684056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.684165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.684196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.684430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.684462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 
00:27:54.888 [2024-11-19 09:29:55.684652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.684685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.684896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.684930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.685081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.685113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.685297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.685329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.685433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.888 [2024-11-19 09:29:55.685465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.888 qpair failed and we were unable to recover it. 00:27:54.888 [2024-11-19 09:29:55.685656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.889 [2024-11-19 09:29:55.685689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.889 qpair failed and we were unable to recover it. 00:27:54.889 [2024-11-19 09:29:55.685807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.889 [2024-11-19 09:29:55.685838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.889 qpair failed and we were unable to recover it. 00:27:54.889 [2024-11-19 09:29:55.686075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.889 [2024-11-19 09:29:55.686110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.889 qpair failed and we were unable to recover it. 00:27:54.889 [2024-11-19 09:29:55.686355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.889 [2024-11-19 09:29:55.686388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.889 qpair failed and we were unable to recover it. 00:27:54.889 [2024-11-19 09:29:55.686568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.889 [2024-11-19 09:29:55.686600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.889 qpair failed and we were unable to recover it. 
00:27:54.889 [2024-11-19 09:29:55.686862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.889 [2024-11-19 09:29:55.686893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420
00:27:54.889 qpair failed and we were unable to recover it.
00:27:54.889 [... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats continuously from 09:29:55.686 through 09:29:55.702, first for tqpair=0x7fae98000b90, once for tqpair=0x7fae9c000b90, then for tqpair=0x22f6ba0, always against addr=10.0.0.2, port=4420 ...]
00:27:54.891 [... connect() failed (errno = 111) triples for tqpair=0x22f6ba0 continue ...]
00:27:54.891 [2024-11-19 09:29:55.704086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:54.891 [2024-11-19 09:29:55.704115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:54.891 [2024-11-19 09:29:55.704125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:54.891 [2024-11-19 09:29:55.704132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:54.891 [2024-11-19 09:29:55.704137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:54.891 [... two more connect() failed (errno = 111) triples for tqpair=0x22f6ba0 ...]
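The app_setup_trace notices above are printed by the SPDK application as it starts with tracing enabled. A minimal sketch of acting on them from the build host, assuming only what the notices themselves state (spdk_trace on PATH, instance id 0, trace file /dev/shm/nvmf_trace.0):

+ spdk_trace -s nvmf -i 0        # snapshot of runtime events for the nvmf app, as suggested by app.c:613
+ cp /dev/shm/nvmf_trace.0 /tmp/ # preserve the raw trace file for offline analysis/debug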
00:27:54.891 [... connect() failed (errno = 111) triples for tqpair=0x22f6ba0 continue, interleaved with reactor start-up notices ...]
00:27:54.891 [2024-11-19 09:29:55.705716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:54.891 [2024-11-19 09:29:55.705807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:54.891 [2024-11-19 09:29:55.705890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:54.891 [2024-11-19 09:29:55.705891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:54.891 [... two more connect() failed (errno = 111) triples for tqpair=0x22f6ba0 ...]
00:27:54.891 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triples continue from 09:29:55.706 through 09:29:55.726 for tqpair=0x22f6ba0, then from 09:29:55.726 onward for tqpair=0x7fae9c000b90, ending with: ...]
00:27:54.894 [2024-11-19 09:29:55.732956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.894 [2024-11-19 09:29:55.732991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.894 qpair failed and we were unable to recover it.
00:27:54.894 [2024-11-19 09:29:55.733137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-19 09:29:55.733172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-19 09:29:55.733372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-19 09:29:55.733406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-19 09:29:55.733649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.894 [2024-11-19 09:29:55.733683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.894 qpair failed and we were unable to recover it. 00:27:54.894 [2024-11-19 09:29:55.733864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.733900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.734037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.734070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.734264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.734296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.734410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.734441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.734620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.734651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.734830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.734860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.735018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.735074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 
00:27:54.895 [2024-11-19 09:29:55.735208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.735240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.735363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.735395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.735585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.735617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.735724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.735755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.735996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.736029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.736238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.736270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.736401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.736433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.736621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.736652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.736756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.736787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.736905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.736937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 
00:27:54.895 [2024-11-19 09:29:55.737163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.737195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.737455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.737486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.737615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.737646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.737761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.737794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.738074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.738109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.738317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.738349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.738460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.738492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.738762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.738795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.738981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.739014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.739195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.739227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 
00:27:54.895 [2024-11-19 09:29:55.739398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.739431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.739617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.739650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.739891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.739926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.740108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.740140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.740312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.740345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.740549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.740583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.740830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.740871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.741072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.741108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.741376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.741412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.741534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.741569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 
00:27:54.895 [2024-11-19 09:29:55.741821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.895 [2024-11-19 09:29:55.741861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.895 qpair failed and we were unable to recover it. 00:27:54.895 [2024-11-19 09:29:55.741975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.742009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.742272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.742306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.742438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.742469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.742586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.742618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.742803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.742835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.743007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.743039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.743218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.743249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.743489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.743520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.743636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.743667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 
00:27:54.896 [2024-11-19 09:29:55.743865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.743897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.744176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.744208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.744477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.744508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.744766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.744797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.744985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.745020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.745207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.745239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.745429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.745461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.745646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.745679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.745839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.745873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.746137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.746174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 
00:27:54.896 [2024-11-19 09:29:55.746321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.746354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.746527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.746559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.746750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.746782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.746962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.747002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.747193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.747226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.747393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.747424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.747663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.747697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.747883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.747915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.748111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.748144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.748315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.748346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 
00:27:54.896 [2024-11-19 09:29:55.748524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.748556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.748678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.748710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.748909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.748941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.749207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.749240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.749364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.749396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.749523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.749554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.749822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.749856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.749988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.750025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.750164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.896 [2024-11-19 09:29:55.750196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.896 qpair failed and we were unable to recover it. 00:27:54.896 [2024-11-19 09:29:55.750392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.750424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 
00:27:54.897 [2024-11-19 09:29:55.750599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.750631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.750807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.750839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.751021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.751055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.751317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.751351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.751475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.751507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.751656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.751689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.751888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.751922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.752055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.752087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.752271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.752305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.752484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.752515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 
00:27:54.897 [2024-11-19 09:29:55.752718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.752759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.752876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.752908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.753094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.753128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.753364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.753397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.753576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.753608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.753822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.753855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.753984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.754017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.754178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.754210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.754336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.754368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.754555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.754587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 
00:27:54.897 [2024-11-19 09:29:55.754701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.754733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.754924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.754964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.755178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.755210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.755381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.755412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.755618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.755653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.755898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.755931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.756095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.756128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.756382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.756414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.756607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.756639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.756878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.756910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 
00:27:54.897 [2024-11-19 09:29:55.757241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.757276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.757473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.757505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.757627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.757659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.757761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.757792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.758202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.758248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.758467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.758501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.758680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.758712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.897 qpair failed and we were unable to recover it. 00:27:54.897 [2024-11-19 09:29:55.758984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.897 [2024-11-19 09:29:55.759017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.759204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.759236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.759338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.759370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.898 [2024-11-19 09:29:55.759587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.759620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.759790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.759822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.759996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.760030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.760268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.760303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.760559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.760591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.760853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.760885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.761012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.761044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.761235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.761266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.761443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.761476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.761712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.761746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.898 [2024-11-19 09:29:55.762007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.762040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.762316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.762400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.762628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.762686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.762938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.762986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.763170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.763202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.763436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.763467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.763588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.763620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.763892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.763923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.764146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.764178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 00:27:54.898 [2024-11-19 09:29:55.764310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.898 [2024-11-19 09:29:55.764340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.898 qpair failed and we were unable to recover it. 
00:27:54.904 [2024-11-19 09:29:55.805747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.805777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-19 09:29:55.805981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.806013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-19 09:29:55.806251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.806281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-19 09:29:55.806539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.806570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-19 09:29:55.806693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.806723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-19 09:29:55.806849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.806879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-19 09:29:55.807145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.807177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-19 09:29:55.807342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.807372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-19 09:29:55.807497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.807528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 00:27:54.904 [2024-11-19 09:29:55.807736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.904 [2024-11-19 09:29:55.807767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.904 qpair failed and we were unable to recover it. 
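For context on the failure repeated above: on Linux, errno 111 is ECONNREFUSED, which connect() returns when nothing is listening on the destination address/port. That matches what this test case (nvmf_target_disconnect_tc2) exercises: the NVMe/TCP host keeps retrying its connection to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) while the target side is, as the test name suggests, deliberately unreachable, so every attempt fails inside SPDK's posix socket layer. Below is a minimal, self-contained sketch in plain POSIX C (not SPDK code; the file name is hypothetical and the loopback address stands in for the 10.0.0.2 target) that reproduces the same errno = 111 when no listener is present:

    /* econnrefused.c (hypothetical) - reproduce "connect() failed, errno = 111"
     * by connecting to a TCP port that has no listener. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                     /* NVMe/TCP port, as in the log */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* stand-in for 10.0.0.2 */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port, Linux reports ECONNREFUSED (111). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Built with any C compiler and run with no listener on the port, this prints "connect() failed, errno = 111 (Connection refused)", the same errno the SPDK host logs on each reconnect attempt.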
00:27:54.904 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:54.904 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:27:54.904 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- timing_exit start_nvmf_tgt
00:27:54.904 [2024-11-19 09:29:55.809574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.904 [2024-11-19 09:29:55.809645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.904 qpair failed and we were unable to recover it.
00:27:54.904 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- xtrace_disable
00:27:54.904 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- set +x
00:27:54.906 [2024-11-19 09:29:55.823134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2304af0 is same with the state(6) to be set
00:27:54.906 [2024-11-19 09:29:55.823371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.906 [2024-11-19 09:29:55.823420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.906 qpair failed and we were unable to recover it.
00:27:54.908 [2024-11-19 09:29:55.840694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-19 09:29:55.840726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-19 09:29:55.840901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-19 09:29:55.840932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-19 09:29:55.841126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-19 09:29:55.841159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-19 09:29:55.841340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-19 09:29:55.841371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-19 09:29:55.841578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-19 09:29:55.841609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-19 09:29:55.841847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-19 09:29:55.841879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-19 09:29:55.842142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-19 09:29:55.842175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-19 09:29:55.842355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-19 09:29:55.842386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-19 09:29:55.842652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.908 [2024-11-19 09:29:55.842683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.908 qpair failed and we were unable to recover it. 00:27:54.908 [2024-11-19 09:29:55.842806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.842837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-19 09:29:55.843036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.843073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.843194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.843226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.843365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.843396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.843508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.843541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.843717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.843747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.843925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.843967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.844155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.844186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.844300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.844331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.844436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.844467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.844706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.844738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 
00:27:54.909 [2024-11-19 09:29:55.844849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.844881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.845057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.845090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.845197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.845229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.845348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:54.909 [2024-11-19 09:29:55.845387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.845521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.845553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.845724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.845757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.846053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.846086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:54.909 [2024-11-19 09:29:55.846194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.846225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.846351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.846383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.846497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.846528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.846747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.846778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.846967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.846999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.847193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.847224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.847418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.847449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.847575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.847605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.847743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.847774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.847899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.909 [2024-11-19 09:29:55.847930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420
00:27:54.909 qpair failed and we were unable to recover it.
00:27:54.909 [2024-11-19 09:29:55.848055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.848085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.848275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.848306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.848502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.848534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.848721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.848751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.848874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.848905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.849154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.849187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.849304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.849334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.909 qpair failed and we were unable to recover it. 00:27:54.909 [2024-11-19 09:29:55.849577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.909 [2024-11-19 09:29:55.849608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.849790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.849821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.849936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.849981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 
00:27:54.910 [2024-11-19 09:29:55.850152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.850183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.850322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.850363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae98000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.850551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.850587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.850780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.850812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.850928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.850968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.851148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.851179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.851298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.851329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.851457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.851488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.851656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.851687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.851876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.851908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 
00:27:54.910 [2024-11-19 09:29:55.852025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.852057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.852230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.852260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.852514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.852545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.852660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.852691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.852874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.852904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.853095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.853154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.853293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.853327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.853459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.853492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.853683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.853715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.853896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.853927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 
00:27:54.910 [2024-11-19 09:29:55.854072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.854105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.854304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.854335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.854535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.854566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.854759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.854790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.854974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.855007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.855180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.855211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.855332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.855363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.855493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.855524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.855707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.855743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.855924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.855963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 
00:27:54.910 [2024-11-19 09:29:55.856158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.856190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.856365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.856397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.856609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.856639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.856763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.856795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.856981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.910 [2024-11-19 09:29:55.857012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.910 qpair failed and we were unable to recover it. 00:27:54.910 [2024-11-19 09:29:55.857278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.857309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.857477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.857507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.857724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.857755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.857883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.857913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.858041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.858074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 
00:27:54.911 [2024-11-19 09:29:55.858314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.858345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.858539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.858576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.858747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.858777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.858970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.859003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.859201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.859232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.859418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.859448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.859629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.859660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.859910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.859940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.860140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.860171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.860287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.860317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 
00:27:54.911 [2024-11-19 09:29:55.860446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.860477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.860600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.860632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.860744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.860774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.860878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.860909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.861106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.861139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.861375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.861406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.861520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.861550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.861738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.861769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.861872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.861902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.862050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.862107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 
00:27:54.911 [2024-11-19 09:29:55.862233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.862264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.862443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.862475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.862648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.862678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.862798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.862829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.863018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.863050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.863232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.863262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.863369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.863399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.863623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.863653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.863834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.863865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.864067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.864098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 
00:27:54.911 [2024-11-19 09:29:55.864336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.864366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.864486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.864517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.864778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.911 [2024-11-19 09:29:55.864808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.911 qpair failed and we were unable to recover it. 00:27:54.911 [2024-11-19 09:29:55.864985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.865017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.865208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.865237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.865358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.865389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.865602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.865634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.865812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.865842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.866022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.866054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.866163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.866193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 
00:27:54.912 [2024-11-19 09:29:55.866325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.866355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.866489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.866525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.866640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.866671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.866803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.866834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.867023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.867061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.867184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.867214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.867339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.867369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.867559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.867589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.867760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.867789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.867967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.867999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 
00:27:54.912 [2024-11-19 09:29:55.868259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.868290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.868462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.868492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.868617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.868648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.868907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.868939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.869057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.869088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.869219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.869250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.869498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.869528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.869638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.869669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.869873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.869904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.870029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.870062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 
00:27:54.912 [2024-11-19 09:29:55.870161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.870191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.870439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.870470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.870738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.870768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.870936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.870978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.871250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.871280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.871523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.871555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.912 [2024-11-19 09:29:55.871661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.912 [2024-11-19 09:29:55.871692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.912 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.871816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.871846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fae9c000b90 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.872040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.872077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.872262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.872294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 
00:27:54.913 [2024-11-19 09:29:55.872419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.872450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.872646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.872677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.872852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.872883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.873102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.873135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.873305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.873337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.873461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.873492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.873675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.873706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.873831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.873861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.874033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.874066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.874263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.874295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 
00:27:54.913 [2024-11-19 09:29:55.874412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.874444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.874557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.874588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.874834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.874865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.874974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.875006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.875175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.875207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.875397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.875427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.875598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.875629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.875831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.875862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.876034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.876066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.876275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.876306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 
00:27:54.913 [2024-11-19 09:29:55.876477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.876508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.876752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.876782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.876977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.877009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.877255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.877288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.877496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.877528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.877718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.877756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.877976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.878008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.878274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.878306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.878520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.878552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.878740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.878771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 
00:27:54.913 [2024-11-19 09:29:55.878888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.878920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.879135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.879167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.879338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.879369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.879501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.879531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.913 [2024-11-19 09:29:55.879787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.913 [2024-11-19 09:29:55.879819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.913 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.880010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.880044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.880232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.880264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.880387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.880418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.880537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.880577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.880766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.880798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-19 09:29:55.880904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.880936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.881183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.881218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.881464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.881496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.881604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.881634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.881839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.881870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.882058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.882093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.882333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.882364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.882548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.882579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.882751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.882783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.882903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.882935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-19 09:29:55.883207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.883240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.883367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.883398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.883510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.883547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.883717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.883747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.883973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.884007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.884197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.884229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.884416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.884448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.884686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.884716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.884830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.884862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.885050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.885084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 
00:27:54.914 [2024-11-19 09:29:55.885268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-19 09:29:55.885300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
00:27:54.914 [2024-11-19 09:29:55.885536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-19 09:29:55.885567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
00:27:54.914 [2024-11-19 09:29:55.885758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-19 09:29:55.885789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
00:27:54.914 [2024-11-19 09:29:55.885984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-19 09:29:55.886017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
00:27:54.914 [2024-11-19 09:29:55.886457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-19 09:29:55.886502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
00:27:54.914 Malloc0
00:27:54.914 [2024-11-19 09:29:55.886813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-19 09:29:55.886848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
00:27:54.914 [2024-11-19 09:29:55.887110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-19 09:29:55.887147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
00:27:54.914 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:54.914 [2024-11-19 09:29:55.887355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-19 09:29:55.887387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
00:27:54.914 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:54.914 [2024-11-19 09:29:55.887648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.914 [2024-11-19 09:29:55.887681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:54.914 qpair failed and we were unable to recover it.
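The script-trace lines woven through the span above show the test advancing while the reconnect loop spins: host/target_disconnect.sh line 21 issues rpc_cmd nvmf_create_transport -t tcp -o, and the target acknowledges it shortly afterwards with the '*** TCP Transport Init ***' notice from nvmf_tcp_create. rpc_cmd is the autotest harness wrapper around SPDK's JSON-RPC client, so a rough standalone equivalent would be the following sketch, assuming a running SPDK target on the default RPC socket and scripts/rpc.py from the SPDK tree:
  # register the TCP transport with the nvmf target, same arguments the trace shows
  ./scripts/rpc.py nvmf_create_transport -t tcp -o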
00:27:54.914 [2024-11-19 09:29:55.887895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.914 [2024-11-19 09:29:55.887927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 [2024-11-19 09:29:55.888119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.914 [2024-11-19 09:29:55.888150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.914 qpair failed and we were unable to recover it. 00:27:54.914 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:54.915 [2024-11-19 09:29:55.888335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.888367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.888609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.888640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.888885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.888917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.889061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.889101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.889280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.889312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.889428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.889460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.889590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.889621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 
00:27:54.915 [2024-11-19 09:29:55.889818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.889850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.889987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.890020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.890138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.890170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.890347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.890378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.890584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.890615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.890728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.890759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.890942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.890985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.891155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.891186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.891322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.891352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.891484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.891515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 
00:27:54.915 [2024-11-19 09:29:55.891682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.891713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.891848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.891879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.892130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.892163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.892378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.892409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.892528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.892560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.892685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.892717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.892891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.892922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.893178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.893210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.893393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.893424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.893662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.893693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 
00:27:54.915 [2024-11-19 09:29:55.893933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.915 [2024-11-19 09:29:55.893977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.915 qpair failed and we were unable to recover it.
00:27:54.915 [2024-11-19 09:29:55.894166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.915 [2024-11-19 09:29:55.894198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.915 qpair failed and we were unable to recover it.
00:27:54.915 [2024-11-19 09:29:55.894215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:54.915 [2024-11-19 09:29:55.894404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.915 [2024-11-19 09:29:55.894436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.915 qpair failed and we were unable to recover it.
00:27:54.915 [2024-11-19 09:29:55.894622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.915 [2024-11-19 09:29:55.894653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.915 qpair failed and we were unable to recover it.
00:27:54.915 [2024-11-19 09:29:55.894861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.915 [2024-11-19 09:29:55.894893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.915 qpair failed and we were unable to recover it.
00:27:54.915 [2024-11-19 09:29:55.895087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.915 [2024-11-19 09:29:55.895119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.915 qpair failed and we were unable to recover it.
00:27:54.915 [2024-11-19 09:29:55.895298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.915 [2024-11-19 09:29:55.895329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.915 qpair failed and we were unable to recover it.
00:27:54.915 [2024-11-19 09:29:55.895590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.915 [2024-11-19 09:29:55.895621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.915 qpair failed and we were unable to recover it.
00:27:54.915 [2024-11-19 09:29:55.895805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.915 [2024-11-19 09:29:55.895836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.915 qpair failed and we were unable to recover it.
00:27:54.915 [2024-11-19 09:29:55.895962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.915 [2024-11-19 09:29:55.895994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.915 qpair failed and we were unable to recover it. 00:27:54.915 [2024-11-19 09:29:55.896181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.896212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.896450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.896482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.896611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.896642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.896767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.896798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.896980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.897013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.897183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.897213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.897341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.897372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.897564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.897595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.897704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.897735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 
00:27:54.916 [2024-11-19 09:29:55.897993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.898031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.898243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.898274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.898447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.898478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.898656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.898687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.898789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.898820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.899006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.899040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.899281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.899313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.899514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.899545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.899647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.899679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.899864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.899894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 
00:27:54.916 [2024-11-19 09:29:55.900138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.900171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.900344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.900376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.900555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.900587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.900801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.900832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.901016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.901050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.901227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.901258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.901471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.901501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.901616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.901647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.901891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.901922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 [2024-11-19 09:29:55.902035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.902067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 
00:27:54.916 [2024-11-19 09:29:55.902197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.916 [2024-11-19 09:29:55.902228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.916 qpair failed and we were unable to recover it.
00:27:54.916 [2024-11-19 09:29:55.902411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.916 [2024-11-19 09:29:55.902441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.916 qpair failed and we were unable to recover it.
00:27:54.916 [2024-11-19 09:29:55.902571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.916 [2024-11-19 09:29:55.902602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.916 qpair failed and we were unable to recover it.
00:27:54.916 [2024-11-19 09:29:55.902735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.916 [2024-11-19 09:29:55.902767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.916 qpair failed and we were unable to recover it.
00:27:54.916 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:54.916 [2024-11-19 09:29:55.902902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.916 [2024-11-19 09:29:55.902935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.916 qpair failed and we were unable to recover it.
00:27:54.916 [2024-11-19 09:29:55.903065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.916 [2024-11-19 09:29:55.903097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.916 qpair failed and we were unable to recover it.
00:27:54.916 [2024-11-19 09:29:55.903272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.916 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:54.916 [2024-11-19 09:29:55.903304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.916 qpair failed and we were unable to recover it.
00:27:54.916 [2024-11-19 09:29:55.903494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.916 [2024-11-19 09:29:55.903525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420
00:27:54.916 qpair failed and we were unable to recover it.
00:27:54.916 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.916 [2024-11-19 09:29:55.903793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.916 [2024-11-19 09:29:55.903825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.916 qpair failed and we were unable to recover it. 00:27:54.916 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:54.916 [2024-11-19 09:29:55.904085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.904119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.904304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.904335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.904471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.904518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.904730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.904779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faea4000b90 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.905006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.905051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.905189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.905221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.905411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.905442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.905647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.905679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 
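The trace lines threaded through the two spans above show the next setup step, host/target_disconnect.sh line 22: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a allows any host to connect and -s sets the subsystem serial number. Under the same assumptions as the earlier sketch, the rough standalone equivalent is:
  # create the test subsystem with an open host allow-list and a fixed serial number
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001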
00:27:54.917 [2024-11-19 09:29:55.905818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.905850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.906020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.906053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.906245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.906277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.906493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.906524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.906644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.906675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.906777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.906809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.907004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.907036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.907157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.907188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.907422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.907453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.907664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.907696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 
00:27:54.917 [2024-11-19 09:29:55.907876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.907907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.908094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.908126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.908314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.908346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:54.917 [2024-11-19 09:29:55.908457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.917 [2024-11-19 09:29:55.908488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:54.917 qpair failed and we were unable to recover it. 00:27:55.178 [2024-11-19 09:29:55.908676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.178 [2024-11-19 09:29:55.908707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:55.178 qpair failed and we were unable to recover it. 00:27:55.178 [2024-11-19 09:29:55.908848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.178 [2024-11-19 09:29:55.908881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:55.178 qpair failed and we were unable to recover it. 00:27:55.178 [2024-11-19 09:29:55.909070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.178 [2024-11-19 09:29:55.909102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:55.178 qpair failed and we were unable to recover it. 00:27:55.178 [2024-11-19 09:29:55.909300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.178 [2024-11-19 09:29:55.909332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:55.178 qpair failed and we were unable to recover it. 00:27:55.178 [2024-11-19 09:29:55.909537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.178 [2024-11-19 09:29:55.909569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:55.178 qpair failed and we were unable to recover it. 00:27:55.178 [2024-11-19 09:29:55.909770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.178 [2024-11-19 09:29:55.909802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420 00:27:55.178 qpair failed and we were unable to recover it. 
00:27:55.178 [2024-11-19 09:29:55.909976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.178 [2024-11-19 09:29:55.910009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:55.178 qpair failed and we were unable to recover it.
00:27:55.178 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:55.178 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:55.178 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:55.178 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.178 [identical triplets for tqpair=0x22f6ba0 continue through 09:29:55.918]
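The next traced step attaches the Malloc0 bdev as a namespace of cnode1. A minimal equivalent, again assuming scripts/rpc.py against the same target; the bdev_malloc_create call is an assumption, since Malloc0 was created earlier in the run, outside this excerpt:

  # Assumed earlier step (not shown here): a RAM-backed bdev, 64 MiB with 512-byte blocks.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # The step traced above: expose Malloc0 as a namespace of the subsystem.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0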
00:27:55.179 [2024-11-19 09:29:55.918579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:55.179 [2024-11-19 09:29:55.918612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f6ba0 with addr=10.0.0.2, port=4420
00:27:55.179 qpair failed and we were unable to recover it.
00:27:55.179 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:55.179 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:55.179 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:55.179 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.179 [identical triplets for tqpair=0x22f6ba0 continue through 09:29:55.922]
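Only this listener RPC makes the target start accepting TCP connections on 10.0.0.2:4420; everything above it is the expected ECONNREFUSED storm from connecting to a port nobody is listening on yet. A sketch of the traced step, plus a host-side probe that would work once the discovery listener (added just below) is in place; nvme-cli is an assumption here and is not what this test uses, since the test drives SPDK's own initiator:

  # The step traced above: open a TCP listener for cnode1 on 10.0.0.2:4420.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Hedged probe with nvme-cli, outside the test: the discovery service should now answer.
  nvme discover -t tcp -a 10.0.0.2 -s 4420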
00:27:55.180 [2024-11-19 09:29:55.922436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:55.180 [2024-11-19 09:29:55.924849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.180 [2024-11-19 09:29:55.924977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.180 [2024-11-19 09:29:55.925024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.180 [2024-11-19 09:29:55.925047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.180 [2024-11-19 09:29:55.925078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.180 [2024-11-19 09:29:55.925130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.180 qpair failed and we were unable to recover it.
00:27:55.180 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:55.180 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:55.180 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:55.180 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:55.180 [2024-11-19 09:29:55.934785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.180 [2024-11-19 09:29:55.934873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.180 [2024-11-19 09:29:55.934907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.180 [2024-11-19 09:29:55.934926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.180 [2024-11-19 09:29:55.934945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.180 [2024-11-19 09:29:55.934996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.180 qpair failed and we were unable to recover it.
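The failure signature changes here: TCP connects now succeed, but the Fabrics-level CONNECT for an I/O qpair is rejected. "Unknown controller ID 0x1" on the target means the CONNECT names a controller association this target instance does not have, which is exactly what a disconnect test provokes; the host sees the command complete with sct 1 (command-specific status type) and sc 130, and surfaces it as CQ transport error -6 (-ENXIO, "No such device or address", as the log itself spells out) on qpair id 3. Reading sc 130 in hex supports the hedged interpretation that this is the NVMe-oF CONNECT Invalid Parameters status, 0x82:

  # Decode the status fields from the log; the 0x82 reading is an interpretation,
  # not something the log states.
  printf 'sct=%d sc=0x%02x\n' 1 130    # prints: sct=1 sc=0x82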
00:27:55.180 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:55.180 09:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1272245
00:27:55.180 [2024-11-19 09:29:55.944857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.180 [2024-11-19 09:29:55.944972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.180 [2024-11-19 09:29:55.945001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.180 [2024-11-19 09:29:55.945012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.180 [2024-11-19 09:29:55.945023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.180 [2024-11-19 09:29:55.945049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.180 qpair failed and we were unable to recover it.
00:27:55.180 [the same CONNECT-poll failure block repeats at roughly 10 ms intervals through 09:29:56.265 for tqpair=0x22f6ba0, qpair id 3]
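wait 1272245 is the harness collecting a background job started earlier in the run (the PID comes from outside this excerpt): target_disconnect.sh launches its host-side workload in the background, bounces the target to provoke the failures logged above, and takes the workload's exit status as the verdict for tc2. Schematically, with illustrative names that are not the actual test code:

  # Hedged sketch of the background/wait pattern used by the harness.
  workload &                 # hypothetical host-side reconnect workload
  bg_pid=$!
  # ... target is disconnected/restarted here, producing the retry blocks above ...
  wait "$bg_pid"             # the workload's exit status decides pass/fail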
00:27:55.442 [2024-11-19 09:29:56.275679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.442 [2024-11-19 09:29:56.275734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.442 [2024-11-19 09:29:56.275748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.442 [2024-11-19 09:29:56.275754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.442 [2024-11-19 09:29:56.275760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.442 [2024-11-19 09:29:56.275775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.442 qpair failed and we were unable to recover it. 00:27:55.442 [2024-11-19 09:29:56.285744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.442 [2024-11-19 09:29:56.285801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.442 [2024-11-19 09:29:56.285816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.442 [2024-11-19 09:29:56.285823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.442 [2024-11-19 09:29:56.285829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.442 [2024-11-19 09:29:56.285844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.442 qpair failed and we were unable to recover it. 00:27:55.442 [2024-11-19 09:29:56.295768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.442 [2024-11-19 09:29:56.295825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.442 [2024-11-19 09:29:56.295840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.442 [2024-11-19 09:29:56.295847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.442 [2024-11-19 09:29:56.295853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.442 [2024-11-19 09:29:56.295867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.442 qpair failed and we were unable to recover it. 
00:27:55.442 [2024-11-19 09:29:56.305695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.442 [2024-11-19 09:29:56.305747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.442 [2024-11-19 09:29:56.305761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.442 [2024-11-19 09:29:56.305767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.442 [2024-11-19 09:29:56.305777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.442 [2024-11-19 09:29:56.305792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.442 qpair failed and we were unable to recover it. 00:27:55.442 [2024-11-19 09:29:56.315793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.442 [2024-11-19 09:29:56.315850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.442 [2024-11-19 09:29:56.315865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.442 [2024-11-19 09:29:56.315872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.442 [2024-11-19 09:29:56.315878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.442 [2024-11-19 09:29:56.315892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.442 qpair failed and we were unable to recover it. 00:27:55.442 [2024-11-19 09:29:56.325813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.442 [2024-11-19 09:29:56.325873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.442 [2024-11-19 09:29:56.325888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.442 [2024-11-19 09:29:56.325894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.442 [2024-11-19 09:29:56.325900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.442 [2024-11-19 09:29:56.325915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.442 qpair failed and we were unable to recover it. 
00:27:55.442 [2024-11-19 09:29:56.335796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.442 [2024-11-19 09:29:56.335853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.442 [2024-11-19 09:29:56.335867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.442 [2024-11-19 09:29:56.335874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.442 [2024-11-19 09:29:56.335880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.335894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 00:27:55.443 [2024-11-19 09:29:56.345879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.345935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.345952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.345960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.345966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.345980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 00:27:55.443 [2024-11-19 09:29:56.355928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.355998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.356013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.356020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.356026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.356040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 
00:27:55.443 [2024-11-19 09:29:56.365956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.366016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.366030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.366038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.366046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.366061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 00:27:55.443 [2024-11-19 09:29:56.376016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.376074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.376088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.376095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.376101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.376117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 00:27:55.443 [2024-11-19 09:29:56.386001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.386083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.386099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.386106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.386113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.386129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 
00:27:55.443 [2024-11-19 09:29:56.396070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.396125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.396144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.396152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.396158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.396173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 00:27:55.443 [2024-11-19 09:29:56.405996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.406058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.406072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.406079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.406085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.406100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 00:27:55.443 [2024-11-19 09:29:56.416098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.416155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.416170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.416177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.416183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.416198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 
00:27:55.443 [2024-11-19 09:29:56.426065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.426119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.426134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.426140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.426146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.426161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 00:27:55.443 [2024-11-19 09:29:56.436171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.436233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.436247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.436254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.436263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.443 [2024-11-19 09:29:56.436278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.443 qpair failed and we were unable to recover it. 00:27:55.443 [2024-11-19 09:29:56.446125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.443 [2024-11-19 09:29:56.446180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.443 [2024-11-19 09:29:56.446195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.443 [2024-11-19 09:29:56.446201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.443 [2024-11-19 09:29:56.446207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.444 [2024-11-19 09:29:56.446221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.444 qpair failed and we were unable to recover it. 
00:27:55.444 [2024-11-19 09:29:56.456142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.444 [2024-11-19 09:29:56.456195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.444 [2024-11-19 09:29:56.456209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.444 [2024-11-19 09:29:56.456216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.444 [2024-11-19 09:29:56.456222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.444 [2024-11-19 09:29:56.456236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-11-19 09:29:56.466172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.444 [2024-11-19 09:29:56.466222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.444 [2024-11-19 09:29:56.466237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.444 [2024-11-19 09:29:56.466244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.444 [2024-11-19 09:29:56.466250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.444 [2024-11-19 09:29:56.466266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.444 [2024-11-19 09:29:56.476269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.444 [2024-11-19 09:29:56.476327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.444 [2024-11-19 09:29:56.476342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.444 [2024-11-19 09:29:56.476349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.444 [2024-11-19 09:29:56.476355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.444 [2024-11-19 09:29:56.476370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.444 qpair failed and we were unable to recover it. 
00:27:55.444 [2024-11-19 09:29:56.486286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.444 [2024-11-19 09:29:56.486340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.444 [2024-11-19 09:29:56.486355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.444 [2024-11-19 09:29:56.486361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.444 [2024-11-19 09:29:56.486367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.444 [2024-11-19 09:29:56.486382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.444 qpair failed and we were unable to recover it. 00:27:55.703 [2024-11-19 09:29:56.496329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.703 [2024-11-19 09:29:56.496387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.703 [2024-11-19 09:29:56.496401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.703 [2024-11-19 09:29:56.496408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.703 [2024-11-19 09:29:56.496414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.703 [2024-11-19 09:29:56.496429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.703 qpair failed and we were unable to recover it. 00:27:55.703 [2024-11-19 09:29:56.506311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.703 [2024-11-19 09:29:56.506368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.703 [2024-11-19 09:29:56.506382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.703 [2024-11-19 09:29:56.506388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.703 [2024-11-19 09:29:56.506394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.703 [2024-11-19 09:29:56.506408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.703 qpair failed and we were unable to recover it. 
00:27:55.703 [2024-11-19 09:29:56.516322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.703 [2024-11-19 09:29:56.516381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.703 [2024-11-19 09:29:56.516395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.703 [2024-11-19 09:29:56.516402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.703 [2024-11-19 09:29:56.516408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.703 [2024-11-19 09:29:56.516422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.703 qpair failed and we were unable to recover it. 00:27:55.703 [2024-11-19 09:29:56.526392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.703 [2024-11-19 09:29:56.526459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.703 [2024-11-19 09:29:56.526477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.703 [2024-11-19 09:29:56.526484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.703 [2024-11-19 09:29:56.526490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.703 [2024-11-19 09:29:56.526505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.703 qpair failed and we were unable to recover it. 00:27:55.703 [2024-11-19 09:29:56.536425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.703 [2024-11-19 09:29:56.536482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.703 [2024-11-19 09:29:56.536499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.703 [2024-11-19 09:29:56.536507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.703 [2024-11-19 09:29:56.536513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.703 [2024-11-19 09:29:56.536529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.703 qpair failed and we were unable to recover it. 
00:27:55.703 [2024-11-19 09:29:56.546443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.703 [2024-11-19 09:29:56.546497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.703 [2024-11-19 09:29:56.546514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.546521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.546527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.546542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 00:27:55.704 [2024-11-19 09:29:56.556497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.556553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.556567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.556574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.556580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.556594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 00:27:55.704 [2024-11-19 09:29:56.566509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.566562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.566577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.566583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.566593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.566608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 
00:27:55.704 [2024-11-19 09:29:56.576559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.576614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.576628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.576635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.576641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.576656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 00:27:55.704 [2024-11-19 09:29:56.586607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.586662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.586676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.586683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.586690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.586704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 00:27:55.704 [2024-11-19 09:29:56.596603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.596662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.596678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.596685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.596692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.596706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 
00:27:55.704 [2024-11-19 09:29:56.606638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.606691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.606705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.606711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.606717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.606732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 00:27:55.704 [2024-11-19 09:29:56.616672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.616725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.616740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.616746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.616752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.616767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 00:27:55.704 [2024-11-19 09:29:56.626681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.626734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.626748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.626755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.626761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.626775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 
00:27:55.704 [2024-11-19 09:29:56.636727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.636781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.636795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.636802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.636808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.636822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 00:27:55.704 [2024-11-19 09:29:56.646682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.646737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.646752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.646758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.646764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.646779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 00:27:55.704 [2024-11-19 09:29:56.656810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.656865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.656882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.656889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.656895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.656910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 
00:27:55.704 [2024-11-19 09:29:56.666806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.666862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.666877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.704 [2024-11-19 09:29:56.666883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.704 [2024-11-19 09:29:56.666889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.704 [2024-11-19 09:29:56.666904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.704 qpair failed and we were unable to recover it. 00:27:55.704 [2024-11-19 09:29:56.676844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.704 [2024-11-19 09:29:56.676901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.704 [2024-11-19 09:29:56.676915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.705 [2024-11-19 09:29:56.676922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.705 [2024-11-19 09:29:56.676928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.705 [2024-11-19 09:29:56.676943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.705 qpair failed and we were unable to recover it. 00:27:55.705 [2024-11-19 09:29:56.686794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.705 [2024-11-19 09:29:56.686852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.705 [2024-11-19 09:29:56.686867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.705 [2024-11-19 09:29:56.686874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.705 [2024-11-19 09:29:56.686880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.705 [2024-11-19 09:29:56.686894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.705 qpair failed and we were unable to recover it. 
00:27:55.705 [2024-11-19 09:29:56.696901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.705 [2024-11-19 09:29:56.696958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.705 [2024-11-19 09:29:56.696973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.705 [2024-11-19 09:29:56.696979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.705 [2024-11-19 09:29:56.696989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.705 [2024-11-19 09:29:56.697004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.705 qpair failed and we were unable to recover it. 00:27:55.705 [2024-11-19 09:29:56.706925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.705 [2024-11-19 09:29:56.706979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.705 [2024-11-19 09:29:56.706993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.705 [2024-11-19 09:29:56.707000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.705 [2024-11-19 09:29:56.707006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.705 [2024-11-19 09:29:56.707021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.705 qpair failed and we were unable to recover it. 00:27:55.705 [2024-11-19 09:29:56.717055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.705 [2024-11-19 09:29:56.717126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.705 [2024-11-19 09:29:56.717140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.705 [2024-11-19 09:29:56.717147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.705 [2024-11-19 09:29:56.717152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.705 [2024-11-19 09:29:56.717167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.705 qpair failed and we were unable to recover it. 
00:27:55.705 [2024-11-19 09:29:56.727054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.705 [2024-11-19 09:29:56.727115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.705 [2024-11-19 09:29:56.727131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.705 [2024-11-19 09:29:56.727138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.705 [2024-11-19 09:29:56.727144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.705 [2024-11-19 09:29:56.727159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.705 qpair failed and we were unable to recover it. 00:27:55.705 [2024-11-19 09:29:56.737056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.705 [2024-11-19 09:29:56.737109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.705 [2024-11-19 09:29:56.737124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.705 [2024-11-19 09:29:56.737131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.705 [2024-11-19 09:29:56.737137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.705 [2024-11-19 09:29:56.737152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.705 qpair failed and we were unable to recover it. 00:27:55.705 [2024-11-19 09:29:56.747083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.705 [2024-11-19 09:29:56.747136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.705 [2024-11-19 09:29:56.747151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.705 [2024-11-19 09:29:56.747157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.705 [2024-11-19 09:29:56.747164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.705 [2024-11-19 09:29:56.747177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.705 qpair failed and we were unable to recover it. 
00:27:55.705 [2024-11-19 09:29:56.757112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.705 [2024-11-19 09:29:56.757169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.705 [2024-11-19 09:29:56.757183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.705 [2024-11-19 09:29:56.757190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.705 [2024-11-19 09:29:56.757196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.705 [2024-11-19 09:29:56.757211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.705 qpair failed and we were unable to recover it. 00:27:55.964 [2024-11-19 09:29:56.767108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.964 [2024-11-19 09:29:56.767164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.964 [2024-11-19 09:29:56.767178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.964 [2024-11-19 09:29:56.767184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.964 [2024-11-19 09:29:56.767190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.964 [2024-11-19 09:29:56.767204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.964 qpair failed and we were unable to recover it. 00:27:55.964 [2024-11-19 09:29:56.777136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:55.964 [2024-11-19 09:29:56.777195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:55.964 [2024-11-19 09:29:56.777213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:55.964 [2024-11-19 09:29:56.777220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:55.964 [2024-11-19 09:29:56.777226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:55.964 [2024-11-19 09:29:56.777241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:55.964 qpair failed and we were unable to recover it. 
00:27:55.964 [2024-11-19 09:29:56.787182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.964 [2024-11-19 09:29:56.787246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.964 [2024-11-19 09:29:56.787265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.964 [2024-11-19 09:29:56.787271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.964 [2024-11-19 09:29:56.787277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.964 [2024-11-19 09:29:56.787292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.964 qpair failed and we were unable to recover it.
00:27:55.964 [2024-11-19 09:29:56.797200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.964 [2024-11-19 09:29:56.797259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.964 [2024-11-19 09:29:56.797273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.964 [2024-11-19 09:29:56.797279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.964 [2024-11-19 09:29:56.797285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.797300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.807229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.807287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.807301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.807308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.807315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.807330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.817253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.817307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.817321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.817328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.817334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.817348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.827275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.827327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.827342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.827348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.827358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.827372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.837317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.837373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.837388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.837394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.837401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.837415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.847325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.847377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.847392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.847399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.847405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.847419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.857354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.857426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.857440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.857448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.857454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.857468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.867410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.867478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.867492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.867499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.867505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.867519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.877475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.877532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.877546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.877553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.877559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.877574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.887376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.887430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.887446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.887452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.887459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.887473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.897499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.897554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.897569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.897576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.897582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.897597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.907498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.907551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.907565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.907571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.907577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.907591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.917537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.917598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.965 [2024-11-19 09:29:56.917616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.965 [2024-11-19 09:29:56.917623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.965 [2024-11-19 09:29:56.917629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.965 [2024-11-19 09:29:56.917643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.965 qpair failed and we were unable to recover it.
00:27:55.965 [2024-11-19 09:29:56.927561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.965 [2024-11-19 09:29:56.927615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:56.927629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:56.927636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:56.927643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:56.927657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:55.966 [2024-11-19 09:29:56.937631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.966 [2024-11-19 09:29:56.937686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:56.937700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:56.937707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:56.937713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:56.937728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:55.966 [2024-11-19 09:29:56.947616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.966 [2024-11-19 09:29:56.947673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:56.947687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:56.947694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:56.947700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:56.947715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:55.966 [2024-11-19 09:29:56.957640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.966 [2024-11-19 09:29:56.957697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:56.957711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:56.957718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:56.957727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:56.957742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:55.966 [2024-11-19 09:29:56.967691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.966 [2024-11-19 09:29:56.967768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:56.967783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:56.967790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:56.967796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:56.967810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:55.966 [2024-11-19 09:29:56.977726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.966 [2024-11-19 09:29:56.977777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:56.977791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:56.977798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:56.977805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:56.977819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:55.966 [2024-11-19 09:29:56.987769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.966 [2024-11-19 09:29:56.987827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:56.987842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:56.987849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:56.987855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:56.987870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:55.966 [2024-11-19 09:29:56.997775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.966 [2024-11-19 09:29:56.997828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:56.997842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:56.997848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:56.997855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:56.997869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:55.966 [2024-11-19 09:29:57.007778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.966 [2024-11-19 09:29:57.007834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:57.007849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:57.007856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:57.007862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:57.007876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:55.966 [2024-11-19 09:29:57.017864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:55.966 [2024-11-19 09:29:57.017927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:55.966 [2024-11-19 09:29:57.017943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:55.966 [2024-11-19 09:29:57.017955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:55.966 [2024-11-19 09:29:57.017961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:55.966 [2024-11-19 09:29:57.017976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:55.966 qpair failed and we were unable to recover it.
00:27:56.226 [2024-11-19 09:29:57.027906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.226 [2024-11-19 09:29:57.027965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.226 [2024-11-19 09:29:57.027979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.226 [2024-11-19 09:29:57.027986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.226 [2024-11-19 09:29:57.027992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.226 [2024-11-19 09:29:57.028006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-11-19 09:29:57.037900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.226 [2024-11-19 09:29:57.037964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.226 [2024-11-19 09:29:57.037978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.226 [2024-11-19 09:29:57.037985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.226 [2024-11-19 09:29:57.037990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.226 [2024-11-19 09:29:57.038005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-11-19 09:29:57.047918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.226 [2024-11-19 09:29:57.047979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.226 [2024-11-19 09:29:57.047996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.226 [2024-11-19 09:29:57.048003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.226 [2024-11-19 09:29:57.048009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.226 [2024-11-19 09:29:57.048024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.226 qpair failed and we were unable to recover it.
00:27:56.226 [2024-11-19 09:29:57.057945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.226 [2024-11-19 09:29:57.058007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.226 [2024-11-19 09:29:57.058021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.226 [2024-11-19 09:29:57.058028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.226 [2024-11-19 09:29:57.058034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.226 [2024-11-19 09:29:57.058049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.067964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.068020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.068034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.068041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.068047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.068062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.078011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.078071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.078085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.078092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.078099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.078114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.088035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.088089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.088103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.088110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.088119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.088134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.098073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.098145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.098160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.098167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.098173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.098188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.108089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.108145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.108159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.108166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.108172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.108186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.118117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.118194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.118209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.118216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.118222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.118236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.128145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.128200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.128214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.128221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.128227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.128241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.138172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.138227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.138242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.138248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.138254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.138269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.148206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.148258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.148272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.148278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.148284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.148299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.158237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.158293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.158308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.158315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.158321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.158335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.168266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.168320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.168334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.168340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.168347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.168361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.178278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.178334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.178357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.178364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.178370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.178385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.227 qpair failed and we were unable to recover it.
00:27:56.227 [2024-11-19 09:29:57.188338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.227 [2024-11-19 09:29:57.188390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.227 [2024-11-19 09:29:57.188404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.227 [2024-11-19 09:29:57.188411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.227 [2024-11-19 09:29:57.188418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.227 [2024-11-19 09:29:57.188432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-11-19 09:29:57.198360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.228 [2024-11-19 09:29:57.198436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.228 [2024-11-19 09:29:57.198450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.228 [2024-11-19 09:29:57.198457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.228 [2024-11-19 09:29:57.198463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.228 [2024-11-19 09:29:57.198478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-11-19 09:29:57.208371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.228 [2024-11-19 09:29:57.208426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.228 [2024-11-19 09:29:57.208440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.228 [2024-11-19 09:29:57.208447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.228 [2024-11-19 09:29:57.208453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.228 [2024-11-19 09:29:57.208468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-11-19 09:29:57.218400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.228 [2024-11-19 09:29:57.218459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.228 [2024-11-19 09:29:57.218473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.228 [2024-11-19 09:29:57.218480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.228 [2024-11-19 09:29:57.218489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.228 [2024-11-19 09:29:57.218504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-11-19 09:29:57.228429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.228 [2024-11-19 09:29:57.228483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.228 [2024-11-19 09:29:57.228497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.228 [2024-11-19 09:29:57.228504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.228 [2024-11-19 09:29:57.228510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.228 [2024-11-19 09:29:57.228525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-11-19 09:29:57.238521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.228 [2024-11-19 09:29:57.238575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.228 [2024-11-19 09:29:57.238589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.228 [2024-11-19 09:29:57.238596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.228 [2024-11-19 09:29:57.238602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.228 [2024-11-19 09:29:57.238616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-11-19 09:29:57.248477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.228 [2024-11-19 09:29:57.248533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.228 [2024-11-19 09:29:57.248547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.228 [2024-11-19 09:29:57.248553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.228 [2024-11-19 09:29:57.248560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.228 [2024-11-19 09:29:57.248574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-11-19 09:29:57.258508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.228 [2024-11-19 09:29:57.258565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.228 [2024-11-19 09:29:57.258578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.228 [2024-11-19 09:29:57.258585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.228 [2024-11-19 09:29:57.258591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.228 [2024-11-19 09:29:57.258606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-11-19 09:29:57.268530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.228 [2024-11-19 09:29:57.268582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.228 [2024-11-19 09:29:57.268596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.228 [2024-11-19 09:29:57.268603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.228 [2024-11-19 09:29:57.268610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.228 [2024-11-19 09:29:57.268625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.228 [2024-11-19 09:29:57.278603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.228 [2024-11-19 09:29:57.278665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.228 [2024-11-19 09:29:57.278678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.228 [2024-11-19 09:29:57.278685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.228 [2024-11-19 09:29:57.278692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.228 [2024-11-19 09:29:57.278707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.228 qpair failed and we were unable to recover it.
00:27:56.488 [2024-11-19 09:29:57.288612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.488 [2024-11-19 09:29:57.288688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.488 [2024-11-19 09:29:57.288702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.488 [2024-11-19 09:29:57.288709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.488 [2024-11-19 09:29:57.288715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.488 [2024-11-19 09:29:57.288730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.488 qpair failed and we were unable to recover it.
00:27:56.488 [2024-11-19 09:29:57.298631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.488 [2024-11-19 09:29:57.298688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.488 [2024-11-19 09:29:57.298703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.488 [2024-11-19 09:29:57.298710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.488 [2024-11-19 09:29:57.298716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.488 [2024-11-19 09:29:57.298731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.488 qpair failed and we were unable to recover it.
00:27:56.488 [2024-11-19 09:29:57.308670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.488 [2024-11-19 09:29:57.308741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.488 [2024-11-19 09:29:57.308759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.488 [2024-11-19 09:29:57.308766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.488 [2024-11-19 09:29:57.308772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.488 [2024-11-19 09:29:57.308786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.488 qpair failed and we were unable to recover it.
00:27:56.488 [2024-11-19 09:29:57.318743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.488 [2024-11-19 09:29:57.318807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.488 [2024-11-19 09:29:57.318822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.488 [2024-11-19 09:29:57.318828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.488 [2024-11-19 09:29:57.318834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.488 [2024-11-19 09:29:57.318849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.488 qpair failed and we were unable to recover it.
00:27:56.488 [2024-11-19 09:29:57.328724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.488 [2024-11-19 09:29:57.328780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.488 [2024-11-19 09:29:57.328794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.488 [2024-11-19 09:29:57.328801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.488 [2024-11-19 09:29:57.328807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.488 [2024-11-19 09:29:57.328821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.488 qpair failed and we were unable to recover it.
00:27:56.488 [2024-11-19 09:29:57.338732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.488 [2024-11-19 09:29:57.338787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.488 [2024-11-19 09:29:57.338802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.488 [2024-11-19 09:29:57.338809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.488 [2024-11-19 09:29:57.338815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.488 [2024-11-19 09:29:57.338829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.488 qpair failed and we were unable to recover it.
00:27:56.488 [2024-11-19 09:29:57.348775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:56.488 [2024-11-19 09:29:57.348829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:56.488 [2024-11-19 09:29:57.348843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:56.488 [2024-11-19 09:29:57.348850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:56.488 [2024-11-19 09:29:57.348859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:56.488 [2024-11-19 09:29:57.348874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.488 qpair failed and we were unable to recover it.
00:27:56.488 [2024-11-19 09:29:57.358794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.488 [2024-11-19 09:29:57.358853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.488 [2024-11-19 09:29:57.358868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.488 [2024-11-19 09:29:57.358875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.488 [2024-11-19 09:29:57.358881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.488 [2024-11-19 09:29:57.358895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.488 qpair failed and we were unable to recover it. 00:27:56.488 [2024-11-19 09:29:57.368834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.488 [2024-11-19 09:29:57.368892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.488 [2024-11-19 09:29:57.368906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.488 [2024-11-19 09:29:57.368913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.488 [2024-11-19 09:29:57.368919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.488 [2024-11-19 09:29:57.368934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.488 qpair failed and we were unable to recover it. 00:27:56.488 [2024-11-19 09:29:57.378856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.488 [2024-11-19 09:29:57.378909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.488 [2024-11-19 09:29:57.378925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.488 [2024-11-19 09:29:57.378932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.488 [2024-11-19 09:29:57.378937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.488 [2024-11-19 09:29:57.378956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.488 qpair failed and we were unable to recover it. 
00:27:56.488 [2024-11-19 09:29:57.388888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.488 [2024-11-19 09:29:57.388949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.488 [2024-11-19 09:29:57.388966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.488 [2024-11-19 09:29:57.388973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.488 [2024-11-19 09:29:57.388980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.488 [2024-11-19 09:29:57.388996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.488 qpair failed and we were unable to recover it. 00:27:56.488 [2024-11-19 09:29:57.398917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.488 [2024-11-19 09:29:57.398978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.488 [2024-11-19 09:29:57.398993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.488 [2024-11-19 09:29:57.399000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.488 [2024-11-19 09:29:57.399005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.488 [2024-11-19 09:29:57.399020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.488 qpair failed and we were unable to recover it. 00:27:56.488 [2024-11-19 09:29:57.408941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.409004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.409018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.409025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.409031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.409046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 
00:27:56.489 [2024-11-19 09:29:57.418973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.419026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.419040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.419046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.419053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.419068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 00:27:56.489 [2024-11-19 09:29:57.428990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.429042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.429058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.429065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.429071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.429085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 00:27:56.489 [2024-11-19 09:29:57.439105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.439188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.439206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.439213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.439219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.439234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 
00:27:56.489 [2024-11-19 09:29:57.449049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.449109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.449123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.449130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.449136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.449151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 00:27:56.489 [2024-11-19 09:29:57.459123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.459179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.459194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.459200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.459207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.459221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 00:27:56.489 [2024-11-19 09:29:57.469113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.469167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.469180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.469188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.469194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.469209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 
00:27:56.489 [2024-11-19 09:29:57.479146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.479200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.479214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.479224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.479230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.479245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 00:27:56.489 [2024-11-19 09:29:57.489173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.489241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.489257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.489264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.489270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.489285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 00:27:56.489 [2024-11-19 09:29:57.499190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.499246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.499262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.499269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.499275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.499290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 
00:27:56.489 [2024-11-19 09:29:57.509237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.509293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.509306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.509313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.509319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.509335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 00:27:56.489 [2024-11-19 09:29:57.519189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.519243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.519257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.519264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.519270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.519284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 00:27:56.489 [2024-11-19 09:29:57.529283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.529353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.489 [2024-11-19 09:29:57.529370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.489 [2024-11-19 09:29:57.529378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.489 [2024-11-19 09:29:57.529384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.489 [2024-11-19 09:29:57.529401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.489 qpair failed and we were unable to recover it. 
00:27:56.489 [2024-11-19 09:29:57.539329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.489 [2024-11-19 09:29:57.539426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.490 [2024-11-19 09:29:57.539441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.490 [2024-11-19 09:29:57.539447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.490 [2024-11-19 09:29:57.539453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.490 [2024-11-19 09:29:57.539469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.490 qpair failed and we were unable to recover it. 00:27:56.749 [2024-11-19 09:29:57.549292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.749 [2024-11-19 09:29:57.549353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.749 [2024-11-19 09:29:57.549368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.749 [2024-11-19 09:29:57.549375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.749 [2024-11-19 09:29:57.549381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.749 [2024-11-19 09:29:57.549396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-11-19 09:29:57.559376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.749 [2024-11-19 09:29:57.559457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.749 [2024-11-19 09:29:57.559471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.749 [2024-11-19 09:29:57.559478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.749 [2024-11-19 09:29:57.559484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.749 [2024-11-19 09:29:57.559498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.749 qpair failed and we were unable to recover it. 
00:27:56.749 [2024-11-19 09:29:57.569397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.749 [2024-11-19 09:29:57.569454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.749 [2024-11-19 09:29:57.569472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.749 [2024-11-19 09:29:57.569479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.749 [2024-11-19 09:29:57.569485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.749 [2024-11-19 09:29:57.569499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-11-19 09:29:57.579367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.749 [2024-11-19 09:29:57.579425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.749 [2024-11-19 09:29:57.579439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.749 [2024-11-19 09:29:57.579446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.749 [2024-11-19 09:29:57.579452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.749 [2024-11-19 09:29:57.579467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-11-19 09:29:57.589445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.749 [2024-11-19 09:29:57.589522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.749 [2024-11-19 09:29:57.589537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.749 [2024-11-19 09:29:57.589544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.749 [2024-11-19 09:29:57.589550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.749 [2024-11-19 09:29:57.589565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.749 qpair failed and we were unable to recover it. 
00:27:56.749 [2024-11-19 09:29:57.599426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.749 [2024-11-19 09:29:57.599482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.749 [2024-11-19 09:29:57.599497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.749 [2024-11-19 09:29:57.599504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.749 [2024-11-19 09:29:57.599510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.749 [2024-11-19 09:29:57.599525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-11-19 09:29:57.609459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.749 [2024-11-19 09:29:57.609514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.749 [2024-11-19 09:29:57.609527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.749 [2024-11-19 09:29:57.609537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.749 [2024-11-19 09:29:57.609543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.749 [2024-11-19 09:29:57.609557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.749 qpair failed and we were unable to recover it. 00:27:56.749 [2024-11-19 09:29:57.619483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.749 [2024-11-19 09:29:57.619537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.749 [2024-11-19 09:29:57.619551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.749 [2024-11-19 09:29:57.619558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.749 [2024-11-19 09:29:57.619564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.749 [2024-11-19 09:29:57.619578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.749 qpair failed and we were unable to recover it. 
00:27:56.749 [2024-11-19 09:29:57.629568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.749 [2024-11-19 09:29:57.629621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.749 [2024-11-19 09:29:57.629636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.629642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.629648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.629663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-11-19 09:29:57.639593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.639653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.639669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.639676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.639682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.639696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-11-19 09:29:57.649658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.649723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.649737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.649744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.649750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.649765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-11-19 09:29:57.659599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.659655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.659670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.659677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.659683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.659698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-11-19 09:29:57.669728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.669784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.669798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.669806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.669812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.669826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-11-19 09:29:57.679696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.679775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.679791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.679798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.679805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.679819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-11-19 09:29:57.689707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.689789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.689803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.689810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.689816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.689831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-11-19 09:29:57.699795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.699849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.699866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.699873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.699879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.699894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-11-19 09:29:57.709865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.709915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.709930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.709937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.709943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.709962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-11-19 09:29:57.719903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.719983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.719998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.720004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.720010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.720025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-11-19 09:29:57.729858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.729925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.729939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.729950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.729957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.729971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-11-19 09:29:57.739878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.739966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.739981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.739991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.739997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.740011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 
00:27:56.750 [2024-11-19 09:29:57.749858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.750 [2024-11-19 09:29:57.749912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.750 [2024-11-19 09:29:57.749928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.750 [2024-11-19 09:29:57.749935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.750 [2024-11-19 09:29:57.749941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.750 [2024-11-19 09:29:57.749961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.750 qpair failed and we were unable to recover it. 00:27:56.750 [2024-11-19 09:29:57.759937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.751 [2024-11-19 09:29:57.759999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.751 [2024-11-19 09:29:57.760013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.751 [2024-11-19 09:29:57.760020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.751 [2024-11-19 09:29:57.760026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.751 [2024-11-19 09:29:57.760041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-11-19 09:29:57.769945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.751 [2024-11-19 09:29:57.770002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.751 [2024-11-19 09:29:57.770016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.751 [2024-11-19 09:29:57.770023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.751 [2024-11-19 09:29:57.770029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.751 [2024-11-19 09:29:57.770044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.751 qpair failed and we were unable to recover it. 
00:27:56.751 [2024-11-19 09:29:57.779959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.751 [2024-11-19 09:29:57.780060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.751 [2024-11-19 09:29:57.780074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.751 [2024-11-19 09:29:57.780081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.751 [2024-11-19 09:29:57.780087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.751 [2024-11-19 09:29:57.780102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-11-19 09:29:57.790032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.751 [2024-11-19 09:29:57.790088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.751 [2024-11-19 09:29:57.790102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.751 [2024-11-19 09:29:57.790109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.751 [2024-11-19 09:29:57.790115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.751 [2024-11-19 09:29:57.790130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.751 qpair failed and we were unable to recover it. 00:27:56.751 [2024-11-19 09:29:57.800079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:56.751 [2024-11-19 09:29:57.800135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:56.751 [2024-11-19 09:29:57.800150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:56.751 [2024-11-19 09:29:57.800157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:56.751 [2024-11-19 09:29:57.800163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:56.751 [2024-11-19 09:29:57.800178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:56.751 qpair failed and we were unable to recover it. 
00:27:57.010 [2024-11-19 09:29:57.810043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.010 [2024-11-19 09:29:57.810103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.810118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.810124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.810131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.810146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 00:27:57.011 [2024-11-19 09:29:57.820075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.820156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.820171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.820177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.820183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.820198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 00:27:57.011 [2024-11-19 09:29:57.830167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.830217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.830235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.830241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.830247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.830261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 
00:27:57.011 [2024-11-19 09:29:57.840204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.840259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.840273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.840279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.840285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.840300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 00:27:57.011 [2024-11-19 09:29:57.850265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.850327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.850341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.850348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.850354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.850369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 00:27:57.011 [2024-11-19 09:29:57.860238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.860294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.860308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.860315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.860321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.860335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 
00:27:57.011 [2024-11-19 09:29:57.870261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.870315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.870329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.870340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.870346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.870361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 00:27:57.011 [2024-11-19 09:29:57.880304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.880361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.880375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.880382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.880388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.880402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 00:27:57.011 [2024-11-19 09:29:57.890307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.890364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.890379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.890386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.890392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.890407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 
00:27:57.011 [2024-11-19 09:29:57.900401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.900463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.900478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.900484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.900491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.900506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 00:27:57.011 [2024-11-19 09:29:57.910336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.910386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.910401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.910407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.910414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.910428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 00:27:57.011 [2024-11-19 09:29:57.920370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.920441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.920455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.011 [2024-11-19 09:29:57.920461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.011 [2024-11-19 09:29:57.920467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.011 [2024-11-19 09:29:57.920482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.011 qpair failed and we were unable to recover it. 
00:27:57.011 [2024-11-19 09:29:57.930417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.011 [2024-11-19 09:29:57.930498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.011 [2024-11-19 09:29:57.930511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:57.930518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:57.930524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:57.930539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 00:27:57.012 [2024-11-19 09:29:57.940383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:57.940440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:57.940455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:57.940462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:57.940467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:57.940482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 00:27:57.012 [2024-11-19 09:29:57.950430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:57.950486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:57.950500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:57.950507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:57.950513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:57.950527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 
00:27:57.012 [2024-11-19 09:29:57.960569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:57.960632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:57.960646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:57.960653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:57.960659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:57.960675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 00:27:57.012 [2024-11-19 09:29:57.970593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:57.970651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:57.970665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:57.970672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:57.970677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:57.970692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 00:27:57.012 [2024-11-19 09:29:57.980571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:57.980624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:57.980639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:57.980646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:57.980652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:57.980666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 
00:27:57.012 [2024-11-19 09:29:57.990599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:57.990681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:57.990696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:57.990703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:57.990709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:57.990723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 00:27:57.012 [2024-11-19 09:29:58.000632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:58.000685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:58.000699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:58.000710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:58.000717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:58.000731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 00:27:57.012 [2024-11-19 09:29:58.010632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:58.010686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:58.010700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:58.010707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:58.010713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:58.010727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 
00:27:57.012 [2024-11-19 09:29:58.020675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:58.020744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:58.020759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:58.020766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:58.020772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:58.020787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 00:27:57.012 [2024-11-19 09:29:58.030716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:58.030767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:58.030781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:58.030787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:58.030794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:58.030807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 00:27:57.012 [2024-11-19 09:29:58.040807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:58.040909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:58.040924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:58.040930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:58.040936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:58.040954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.012 qpair failed and we were unable to recover it. 
00:27:57.012 [2024-11-19 09:29:58.050784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.012 [2024-11-19 09:29:58.050844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.012 [2024-11-19 09:29:58.050858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.012 [2024-11-19 09:29:58.050865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.012 [2024-11-19 09:29:58.050870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.012 [2024-11-19 09:29:58.050885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.013 qpair failed and we were unable to recover it. 00:27:57.013 [2024-11-19 09:29:58.060810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.013 [2024-11-19 09:29:58.060869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.013 [2024-11-19 09:29:58.060884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.013 [2024-11-19 09:29:58.060891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.013 [2024-11-19 09:29:58.060898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.013 [2024-11-19 09:29:58.060912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.013 qpair failed and we were unable to recover it. 00:27:57.273 [2024-11-19 09:29:58.070774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.070831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.070846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.070852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.070859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.070874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 
00:27:57.273 [2024-11-19 09:29:58.080877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.080931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.080945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.080957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.080963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.080978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 00:27:57.273 [2024-11-19 09:29:58.090899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.090964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.090980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.090986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.090992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.091007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 00:27:57.273 [2024-11-19 09:29:58.100907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.100960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.100975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.100982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.100988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.101002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 
00:27:57.273 [2024-11-19 09:29:58.110986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.111041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.111055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.111062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.111068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.111083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 00:27:57.273 [2024-11-19 09:29:58.120991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.121050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.121063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.121070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.121077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.121091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 00:27:57.273 [2024-11-19 09:29:58.130998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.131058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.131073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.131083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.131089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.131104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 
00:27:57.273 [2024-11-19 09:29:58.141040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.141104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.141119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.141125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.141131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.141146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 00:27:57.273 [2024-11-19 09:29:58.151052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.151132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.151146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.151153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.151159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.151174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 00:27:57.273 [2024-11-19 09:29:58.161108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.161165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.161179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.161187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.161193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.273 [2024-11-19 09:29:58.161208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.273 qpair failed and we were unable to recover it. 
00:27:57.273 [2024-11-19 09:29:58.171133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.273 [2024-11-19 09:29:58.171187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.273 [2024-11-19 09:29:58.171201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.273 [2024-11-19 09:29:58.171207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.273 [2024-11-19 09:29:58.171214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.171232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 00:27:57.274 [2024-11-19 09:29:58.181164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.181218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.181233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.181239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.181246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.181261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 00:27:57.274 [2024-11-19 09:29:58.191186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.191241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.191256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.191263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.191269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.191284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 
00:27:57.274 [2024-11-19 09:29:58.201212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.201268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.201282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.201289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.201295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.201310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 00:27:57.274 [2024-11-19 09:29:58.211254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.211304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.211319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.211326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.211332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.211346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 00:27:57.274 [2024-11-19 09:29:58.221259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.221337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.221351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.221358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.221364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.221379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 
00:27:57.274 [2024-11-19 09:29:58.231292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.231341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.231356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.231362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.231368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.231383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 00:27:57.274 [2024-11-19 09:29:58.241304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.241389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.241403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.241410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.241415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.241430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 00:27:57.274 [2024-11-19 09:29:58.251355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.251412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.251426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.251433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.251439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.251453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 
00:27:57.274 [2024-11-19 09:29:58.261367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.261425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.261440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.261450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.261456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.261470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 00:27:57.274 [2024-11-19 09:29:58.271399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.271454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.271468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.271475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.271481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.271495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 00:27:57.274 [2024-11-19 09:29:58.281439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.281493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.281508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.281515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.281522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.281536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 
00:27:57.274 [2024-11-19 09:29:58.291452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.291509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.291524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.291532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.274 [2024-11-19 09:29:58.291538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.274 [2024-11-19 09:29:58.291553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.274 qpair failed and we were unable to recover it. 00:27:57.274 [2024-11-19 09:29:58.301474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.274 [2024-11-19 09:29:58.301529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.274 [2024-11-19 09:29:58.301543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.274 [2024-11-19 09:29:58.301550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.275 [2024-11-19 09:29:58.301556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.275 [2024-11-19 09:29:58.301574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.275 qpair failed and we were unable to recover it. 00:27:57.275 [2024-11-19 09:29:58.311501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.275 [2024-11-19 09:29:58.311556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.275 [2024-11-19 09:29:58.311570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.275 [2024-11-19 09:29:58.311577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.275 [2024-11-19 09:29:58.311583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.275 [2024-11-19 09:29:58.311598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.275 qpair failed and we were unable to recover it. 
00:27:57.275 [2024-11-19 09:29:58.321605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.275 [2024-11-19 09:29:58.321702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.275 [2024-11-19 09:29:58.321718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.275 [2024-11-19 09:29:58.321725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.275 [2024-11-19 09:29:58.321730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.275 [2024-11-19 09:29:58.321745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.275 qpair failed and we were unable to recover it. 00:27:57.535 [2024-11-19 09:29:58.331577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.535 [2024-11-19 09:29:58.331630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.535 [2024-11-19 09:29:58.331644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.535 [2024-11-19 09:29:58.331651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.535 [2024-11-19 09:29:58.331657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.535 [2024-11-19 09:29:58.331671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.535 qpair failed and we were unable to recover it. 00:27:57.535 [2024-11-19 09:29:58.341606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.535 [2024-11-19 09:29:58.341659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.535 [2024-11-19 09:29:58.341674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.535 [2024-11-19 09:29:58.341680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.535 [2024-11-19 09:29:58.341687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.535 [2024-11-19 09:29:58.341701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.535 qpair failed and we were unable to recover it. 
00:27:57.535 [2024-11-19 09:29:58.351635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.535 [2024-11-19 09:29:58.351687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.535 [2024-11-19 09:29:58.351702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.535 [2024-11-19 09:29:58.351708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.535 [2024-11-19 09:29:58.351715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.535 [2024-11-19 09:29:58.351729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.535 qpair failed and we were unable to recover it. 00:27:57.535 [2024-11-19 09:29:58.361681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.535 [2024-11-19 09:29:58.361742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.535 [2024-11-19 09:29:58.361757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.535 [2024-11-19 09:29:58.361765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.535 [2024-11-19 09:29:58.361770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.535 [2024-11-19 09:29:58.361785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.535 qpair failed and we were unable to recover it. 00:27:57.535 [2024-11-19 09:29:58.371690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.535 [2024-11-19 09:29:58.371762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.535 [2024-11-19 09:29:58.371776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.535 [2024-11-19 09:29:58.371783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.535 [2024-11-19 09:29:58.371789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.535 [2024-11-19 09:29:58.371804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.535 qpair failed and we were unable to recover it. 
00:27:57.535 [2024-11-19 09:29:58.381672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.535 [2024-11-19 09:29:58.381750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.535 [2024-11-19 09:29:58.381764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.535 [2024-11-19 09:29:58.381771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.535 [2024-11-19 09:29:58.381777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.535 [2024-11-19 09:29:58.381792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.535 qpair failed and we were unable to recover it. 00:27:57.535 [2024-11-19 09:29:58.391742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.535 [2024-11-19 09:29:58.391793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.535 [2024-11-19 09:29:58.391808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.535 [2024-11-19 09:29:58.391818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.535 [2024-11-19 09:29:58.391824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.535 [2024-11-19 09:29:58.391839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.535 qpair failed and we were unable to recover it. 00:27:57.535 [2024-11-19 09:29:58.401826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.535 [2024-11-19 09:29:58.401882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.535 [2024-11-19 09:29:58.401897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.535 [2024-11-19 09:29:58.401904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.535 [2024-11-19 09:29:58.401910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.535 [2024-11-19 09:29:58.401924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.536 qpair failed and we were unable to recover it. 
00:27:57.536 [2024-11-19 09:29:58.411803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.536 [2024-11-19 09:29:58.411856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.536 [2024-11-19 09:29:58.411870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.536 [2024-11-19 09:29:58.411877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.536 [2024-11-19 09:29:58.411884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.536 [2024-11-19 09:29:58.411898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.536 qpair failed and we were unable to recover it. 00:27:57.536 [2024-11-19 09:29:58.421829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.536 [2024-11-19 09:29:58.421895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.536 [2024-11-19 09:29:58.421910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.536 [2024-11-19 09:29:58.421916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.536 [2024-11-19 09:29:58.421922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.536 [2024-11-19 09:29:58.421937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.536 qpair failed and we were unable to recover it. 00:27:57.536 [2024-11-19 09:29:58.431860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.536 [2024-11-19 09:29:58.431911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.536 [2024-11-19 09:29:58.431926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.536 [2024-11-19 09:29:58.431932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.536 [2024-11-19 09:29:58.431938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.536 [2024-11-19 09:29:58.431960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.536 qpair failed and we were unable to recover it. 
00:27:57.536 [2024-11-19 09:29:58.441908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.536 [2024-11-19 09:29:58.441986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.536 [2024-11-19 09:29:58.442000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.536 [2024-11-19 09:29:58.442007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.536 [2024-11-19 09:29:58.442013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.536 [2024-11-19 09:29:58.442029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.536 qpair failed and we were unable to recover it. 00:27:57.536 [2024-11-19 09:29:58.451984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.536 [2024-11-19 09:29:58.452037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.536 [2024-11-19 09:29:58.452051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.536 [2024-11-19 09:29:58.452058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.536 [2024-11-19 09:29:58.452064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.536 [2024-11-19 09:29:58.452079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.536 qpair failed and we were unable to recover it. 00:27:57.536 [2024-11-19 09:29:58.461953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:57.536 [2024-11-19 09:29:58.462038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:57.536 [2024-11-19 09:29:58.462060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:57.536 [2024-11-19 09:29:58.462068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:57.536 [2024-11-19 09:29:58.462074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:57.536 [2024-11-19 09:29:58.462091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:57.536 qpair failed and we were unable to recover it. 
... last seven-line sequence repeated for every subsequent I/O qpair connect attempt between 09:29:58.471991 and 09:29:59.093767 (one attempt roughly every 10 ms; all fail identically with Unknown controller ID 0x1, sct 1, sc 130, tqpair=0x22f6ba0, and none recover) ...
00:27:58.061 [2024-11-19 09:29:59.103814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.061 [2024-11-19 09:29:59.103871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.061 [2024-11-19 09:29:59.103886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.061 [2024-11-19 09:29:59.103893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.061 [2024-11-19 09:29:59.103899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.061 [2024-11-19 09:29:59.103914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.061 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.113775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.113832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.113846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.113853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.113860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.113874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.123879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.123938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.123958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.123965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.123971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.123985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 
00:27:58.319 [2024-11-19 09:29:59.133925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.134007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.134021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.134028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.134034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.134049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.143920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.143979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.143994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.144002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.144008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.144023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.153950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.154005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.154019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.154026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.154032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.154047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 
00:27:58.319 [2024-11-19 09:29:59.164029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.164130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.164145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.164153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.164159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.164174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.174016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.174076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.174091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.174101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.174107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.174122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.184032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.184085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.184100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.184107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.184114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.184128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 
00:27:58.319 [2024-11-19 09:29:59.194044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.194096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.194110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.194117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.194123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.194138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.204115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.204177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.204192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.204199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.204205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.204219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.214121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.214180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.214195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.214202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.214208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.214226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 
00:27:58.319 [2024-11-19 09:29:59.224079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.224134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.224149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.224155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.224161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.224175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.234179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.234231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.234245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.234251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.234257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.234271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.244225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.244285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.244299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.244306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.244312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.244327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 
00:27:58.319 [2024-11-19 09:29:59.254255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.254313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.254328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.254335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.254341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.254356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.264262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.264316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.264331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.264338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.264344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.264359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.274246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.274303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.274318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.274324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.274330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.274345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 
00:27:58.319 [2024-11-19 09:29:59.284329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.284392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.284407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.284414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.284420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.284436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.294290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.294348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.294362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.294369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.319 [2024-11-19 09:29:59.294375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.319 [2024-11-19 09:29:59.294389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.319 qpair failed and we were unable to recover it. 00:27:58.319 [2024-11-19 09:29:59.304318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.319 [2024-11-19 09:29:59.304373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.319 [2024-11-19 09:29:59.304391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.319 [2024-11-19 09:29:59.304398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.320 [2024-11-19 09:29:59.304404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.320 [2024-11-19 09:29:59.304418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.320 qpair failed and we were unable to recover it. 
00:27:58.320 [2024-11-19 09:29:59.314445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.320 [2024-11-19 09:29:59.314499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.320 [2024-11-19 09:29:59.314513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.320 [2024-11-19 09:29:59.314520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.320 [2024-11-19 09:29:59.314526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.320 [2024-11-19 09:29:59.314541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.320 qpair failed and we were unable to recover it. 00:27:58.320 [2024-11-19 09:29:59.324465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.320 [2024-11-19 09:29:59.324521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.320 [2024-11-19 09:29:59.324535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.320 [2024-11-19 09:29:59.324542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.320 [2024-11-19 09:29:59.324548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.320 [2024-11-19 09:29:59.324562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.320 qpair failed and we were unable to recover it. 00:27:58.320 [2024-11-19 09:29:59.334419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.320 [2024-11-19 09:29:59.334498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.320 [2024-11-19 09:29:59.334512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.320 [2024-11-19 09:29:59.334519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.320 [2024-11-19 09:29:59.334525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.320 [2024-11-19 09:29:59.334539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.320 qpair failed and we were unable to recover it. 
00:27:58.320 [2024-11-19 09:29:59.344494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.320 [2024-11-19 09:29:59.344551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.320 [2024-11-19 09:29:59.344565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.320 [2024-11-19 09:29:59.344572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.320 [2024-11-19 09:29:59.344578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.320 [2024-11-19 09:29:59.344596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.320 qpair failed and we were unable to recover it. 00:27:58.320 [2024-11-19 09:29:59.354471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.320 [2024-11-19 09:29:59.354524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.320 [2024-11-19 09:29:59.354539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.320 [2024-11-19 09:29:59.354545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.320 [2024-11-19 09:29:59.354551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.320 [2024-11-19 09:29:59.354565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.320 qpair failed and we were unable to recover it. 00:27:58.320 [2024-11-19 09:29:59.364542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.320 [2024-11-19 09:29:59.364601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.320 [2024-11-19 09:29:59.364616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.320 [2024-11-19 09:29:59.364623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.320 [2024-11-19 09:29:59.364629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.320 [2024-11-19 09:29:59.364644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.320 qpair failed and we were unable to recover it. 
00:27:58.580 [2024-11-19 09:29:59.374544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.374603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.374618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.374624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.374630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.374645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 00:27:58.580 [2024-11-19 09:29:59.384599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.384662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.384677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.384684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.384690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.384705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 00:27:58.580 [2024-11-19 09:29:59.394716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.394777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.394792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.394799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.394805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.394819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 
00:27:58.580 [2024-11-19 09:29:59.404706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.404771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.404786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.404793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.404799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.404813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 00:27:58.580 [2024-11-19 09:29:59.414706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.414761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.414776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.414782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.414788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.414803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 00:27:58.580 [2024-11-19 09:29:59.424729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.424797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.424811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.424818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.424824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.424838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 
00:27:58.580 [2024-11-19 09:29:59.434746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.434801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.434819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.434826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.434832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.434847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 00:27:58.580 [2024-11-19 09:29:59.444779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.444836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.444850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.444857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.444863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.444878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 00:27:58.580 [2024-11-19 09:29:59.454849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.454907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.454921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.454929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.454935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.454953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 
00:27:58.580 [2024-11-19 09:29:59.464835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.464899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.464913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.464920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.464926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.464941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 00:27:58.580 [2024-11-19 09:29:59.474856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.474904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.474918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.474925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.474931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.474953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 00:27:58.580 [2024-11-19 09:29:59.484901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.484961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.484977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.484984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.484989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.580 [2024-11-19 09:29:59.485004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.580 qpair failed and we were unable to recover it. 
00:27:58.580 [2024-11-19 09:29:59.494931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.580 [2024-11-19 09:29:59.494985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.580 [2024-11-19 09:29:59.494999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.580 [2024-11-19 09:29:59.495006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.580 [2024-11-19 09:29:59.495012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.581 [2024-11-19 09:29:59.495027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.581 qpair failed and we were unable to recover it. 00:27:58.581 [2024-11-19 09:29:59.504962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.581 [2024-11-19 09:29:59.505025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.581 [2024-11-19 09:29:59.505040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.581 [2024-11-19 09:29:59.505048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.581 [2024-11-19 09:29:59.505055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.581 [2024-11-19 09:29:59.505071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.581 qpair failed and we were unable to recover it. 00:27:58.581 [2024-11-19 09:29:59.514978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.581 [2024-11-19 09:29:59.515040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.581 [2024-11-19 09:29:59.515055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.581 [2024-11-19 09:29:59.515062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.581 [2024-11-19 09:29:59.515068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.581 [2024-11-19 09:29:59.515083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.581 qpair failed and we were unable to recover it. 
00:27:58.581 [2024-11-19 09:29:59.525019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.581 [2024-11-19 09:29:59.525078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.581 [2024-11-19 09:29:59.525092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.581 [2024-11-19 09:29:59.525099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.581 [2024-11-19 09:29:59.525105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.581 [2024-11-19 09:29:59.525121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.581 qpair failed and we were unable to recover it. 00:27:58.581 [2024-11-19 09:29:59.535046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.581 [2024-11-19 09:29:59.535103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.581 [2024-11-19 09:29:59.535120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.581 [2024-11-19 09:29:59.535127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.581 [2024-11-19 09:29:59.535133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.581 [2024-11-19 09:29:59.535149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.581 qpair failed and we were unable to recover it. 00:27:58.581 [2024-11-19 09:29:59.545064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.581 [2024-11-19 09:29:59.545129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.581 [2024-11-19 09:29:59.545144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.581 [2024-11-19 09:29:59.545151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.581 [2024-11-19 09:29:59.545157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.581 [2024-11-19 09:29:59.545172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.581 qpair failed and we were unable to recover it. 
00:27:58.581 [2024-11-19 09:29:59.555100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.581 [2024-11-19 09:29:59.555150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.581 [2024-11-19 09:29:59.555166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.581 [2024-11-19 09:29:59.555173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.581 [2024-11-19 09:29:59.555179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.581 [2024-11-19 09:29:59.555194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.581 qpair failed and we were unable to recover it. 00:27:58.581 [2024-11-19 09:29:59.565135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.581 [2024-11-19 09:29:59.565190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.581 [2024-11-19 09:29:59.565208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.581 [2024-11-19 09:29:59.565214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.581 [2024-11-19 09:29:59.565220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.581 [2024-11-19 09:29:59.565235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.581 qpair failed and we were unable to recover it. 00:27:58.581 [2024-11-19 09:29:59.575206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:58.581 [2024-11-19 09:29:59.575263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:58.581 [2024-11-19 09:29:59.575278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:58.581 [2024-11-19 09:29:59.575285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:58.581 [2024-11-19 09:29:59.575291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:58.581 [2024-11-19 09:29:59.575306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:58.581 qpair failed and we were unable to recover it. 
00:27:58.581 [2024-11-19 09:29:59.585228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:58.581 [2024-11-19 09:29:59.585286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:58.581 [2024-11-19 09:29:59.585301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:58.581 [2024-11-19 09:29:59.585308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:58.581 [2024-11-19 09:29:59.585314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:58.581 [2024-11-19 09:29:59.585330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:58.581 qpair failed and we were unable to recover it.
[the seven-line CONNECT failure sequence above repeats 67 more times at roughly 10 ms intervals (target timestamps 09:29:59.595 through 09:30:00.257), identical in every field on each attempt: Unknown controller ID 0x1, Connect command failed rc -5, sct 1, sc 130, failed to connect tqpair=0x22f6ba0, CQ transport error -6 on qpair id 3, ending "qpair failed and we were unable to recover it."]
00:27:59.366 [2024-11-19 09:30:00.267064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:59.366 [2024-11-19 09:30:00.267122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:59.366 [2024-11-19 09:30:00.267138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:59.366 [2024-11-19 09:30:00.267146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:59.366 [2024-11-19 09:30:00.267152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0
00:27:59.366 [2024-11-19 09:30:00.267168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:59.366 qpair failed and we were unable to recover it.
00:27:59.366 [2024-11-19 09:30:00.277145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.366 [2024-11-19 09:30:00.277199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.366 [2024-11-19 09:30:00.277215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.366 [2024-11-19 09:30:00.277223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.366 [2024-11-19 09:30:00.277231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.366 [2024-11-19 09:30:00.277250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-11-19 09:30:00.287264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.366 [2024-11-19 09:30:00.287327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.366 [2024-11-19 09:30:00.287343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.366 [2024-11-19 09:30:00.287351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.366 [2024-11-19 09:30:00.287358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.366 [2024-11-19 09:30:00.287373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-11-19 09:30:00.297287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.366 [2024-11-19 09:30:00.297346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.366 [2024-11-19 09:30:00.297363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.366 [2024-11-19 09:30:00.297371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.366 [2024-11-19 09:30:00.297377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.366 [2024-11-19 09:30:00.297392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.366 qpair failed and we were unable to recover it. 
00:27:59.366 [2024-11-19 09:30:00.307267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.366 [2024-11-19 09:30:00.307326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.366 [2024-11-19 09:30:00.307342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.366 [2024-11-19 09:30:00.307349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.366 [2024-11-19 09:30:00.307356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.366 [2024-11-19 09:30:00.307372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-11-19 09:30:00.317294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.366 [2024-11-19 09:30:00.317347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.366 [2024-11-19 09:30:00.317363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.366 [2024-11-19 09:30:00.317370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.366 [2024-11-19 09:30:00.317377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.366 [2024-11-19 09:30:00.317393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.366 qpair failed and we were unable to recover it. 00:27:59.366 [2024-11-19 09:30:00.327337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.366 [2024-11-19 09:30:00.327399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.366 [2024-11-19 09:30:00.327416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.366 [2024-11-19 09:30:00.327423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.366 [2024-11-19 09:30:00.327430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.366 [2024-11-19 09:30:00.327445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.366 qpair failed and we were unable to recover it. 
00:27:59.366 [2024-11-19 09:30:00.337290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.366 [2024-11-19 09:30:00.337373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.366 [2024-11-19 09:30:00.337389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.367 [2024-11-19 09:30:00.337397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.367 [2024-11-19 09:30:00.337403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.367 [2024-11-19 09:30:00.337419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-11-19 09:30:00.347314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.367 [2024-11-19 09:30:00.347368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.367 [2024-11-19 09:30:00.347388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.367 [2024-11-19 09:30:00.347395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.367 [2024-11-19 09:30:00.347401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.367 [2024-11-19 09:30:00.347419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-11-19 09:30:00.357331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.367 [2024-11-19 09:30:00.357389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.367 [2024-11-19 09:30:00.357405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.367 [2024-11-19 09:30:00.357412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.367 [2024-11-19 09:30:00.357418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.367 [2024-11-19 09:30:00.357434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.367 qpair failed and we were unable to recover it. 
00:27:59.367 [2024-11-19 09:30:00.367399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.367 [2024-11-19 09:30:00.367457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.367 [2024-11-19 09:30:00.367473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.367 [2024-11-19 09:30:00.367480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.367 [2024-11-19 09:30:00.367486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.367 [2024-11-19 09:30:00.367502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-11-19 09:30:00.377496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.367 [2024-11-19 09:30:00.377564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.367 [2024-11-19 09:30:00.377581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.367 [2024-11-19 09:30:00.377588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.367 [2024-11-19 09:30:00.377594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.367 [2024-11-19 09:30:00.377610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-11-19 09:30:00.387411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.367 [2024-11-19 09:30:00.387467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.367 [2024-11-19 09:30:00.387485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.367 [2024-11-19 09:30:00.387492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.367 [2024-11-19 09:30:00.387502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.367 [2024-11-19 09:30:00.387519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.367 qpair failed and we were unable to recover it. 
00:27:59.367 [2024-11-19 09:30:00.397523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.367 [2024-11-19 09:30:00.397573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.367 [2024-11-19 09:30:00.397589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.367 [2024-11-19 09:30:00.397597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.367 [2024-11-19 09:30:00.397603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.367 [2024-11-19 09:30:00.397619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-11-19 09:30:00.407574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.367 [2024-11-19 09:30:00.407629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.367 [2024-11-19 09:30:00.407646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.367 [2024-11-19 09:30:00.407653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.367 [2024-11-19 09:30:00.407659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.367 [2024-11-19 09:30:00.407675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.367 qpair failed and we were unable to recover it. 00:27:59.367 [2024-11-19 09:30:00.417574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.367 [2024-11-19 09:30:00.417631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.367 [2024-11-19 09:30:00.417647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.367 [2024-11-19 09:30:00.417654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.367 [2024-11-19 09:30:00.417662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.367 [2024-11-19 09:30:00.417682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.367 qpair failed and we were unable to recover it. 
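The two negative codes bracketing each attempt look like plain negated errnos from the host stack: the `rc -5` from the CONNECT poll is almost certainly `-EIO`, and the `-6` in the "CQ transport error" line is `-ENXIO`, which is exactly the "No such device or address" text the log prints beside it. A quick way to confirm the mapping (standard C library only, nothing SPDK-specific):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* errno values seen in the log: "rc -5" and "CQ transport error -6" */
    printf("errno 5 -> %s\n", strerror(5));  /* EIO: Input/output error */
    printf("errno 6 -> %s\n", strerror(6));  /* ENXIO: No such device or address */
    return 0;
}
```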
00:27:59.627 [2024-11-19 09:30:00.427533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.627 [2024-11-19 09:30:00.427620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.627 [2024-11-19 09:30:00.427637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.627 [2024-11-19 09:30:00.427644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.627 [2024-11-19 09:30:00.427650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.627 [2024-11-19 09:30:00.427665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.627 qpair failed and we were unable to recover it. 00:27:59.627 [2024-11-19 09:30:00.437635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.627 [2024-11-19 09:30:00.437689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.627 [2024-11-19 09:30:00.437705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.627 [2024-11-19 09:30:00.437713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.627 [2024-11-19 09:30:00.437719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.627 [2024-11-19 09:30:00.437735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.627 qpair failed and we were unable to recover it. 00:27:59.627 [2024-11-19 09:30:00.447591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.627 [2024-11-19 09:30:00.447668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.627 [2024-11-19 09:30:00.447685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.627 [2024-11-19 09:30:00.447692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.627 [2024-11-19 09:30:00.447698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.627 [2024-11-19 09:30:00.447716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.627 qpair failed and we were unable to recover it. 
00:27:59.627 [2024-11-19 09:30:00.457641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.627 [2024-11-19 09:30:00.457727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.627 [2024-11-19 09:30:00.457744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.627 [2024-11-19 09:30:00.457751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.627 [2024-11-19 09:30:00.457757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.627 [2024-11-19 09:30:00.457774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.627 qpair failed and we were unable to recover it. 00:27:59.627 [2024-11-19 09:30:00.467710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.627 [2024-11-19 09:30:00.467765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.627 [2024-11-19 09:30:00.467782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.627 [2024-11-19 09:30:00.467790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.627 [2024-11-19 09:30:00.467796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.627 [2024-11-19 09:30:00.467812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.627 qpair failed and we were unable to recover it. 00:27:59.627 [2024-11-19 09:30:00.477775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.627 [2024-11-19 09:30:00.477879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.627 [2024-11-19 09:30:00.477899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.627 [2024-11-19 09:30:00.477906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.627 [2024-11-19 09:30:00.477912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.627 [2024-11-19 09:30:00.477928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.627 qpair failed and we were unable to recover it. 
00:27:59.627 [2024-11-19 09:30:00.487776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.627 [2024-11-19 09:30:00.487836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.627 [2024-11-19 09:30:00.487852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.627 [2024-11-19 09:30:00.487860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.627 [2024-11-19 09:30:00.487867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.627 [2024-11-19 09:30:00.487884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.627 qpair failed and we were unable to recover it. 00:27:59.627 [2024-11-19 09:30:00.497787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.627 [2024-11-19 09:30:00.497845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.627 [2024-11-19 09:30:00.497861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.627 [2024-11-19 09:30:00.497869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.497875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.497891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 00:27:59.628 [2024-11-19 09:30:00.507836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.507890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.507906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.507913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.507920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.507936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 
00:27:59.628 [2024-11-19 09:30:00.517878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.517969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.517987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.517994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.518003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.518019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 00:27:59.628 [2024-11-19 09:30:00.527835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.527893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.527912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.527920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.527926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.527943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 00:27:59.628 [2024-11-19 09:30:00.537848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.537906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.537923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.537930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.537937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.537960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 
00:27:59.628 [2024-11-19 09:30:00.547873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.547930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.547951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.547960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.547966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.547982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 00:27:59.628 [2024-11-19 09:30:00.557960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.558016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.558037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.558047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.558054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.558072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 00:27:59.628 [2024-11-19 09:30:00.568017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.568100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.568117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.568124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.568130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.568147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 
00:27:59.628 [2024-11-19 09:30:00.578001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.578059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.578075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.578082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.578088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.578104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 00:27:59.628 [2024-11-19 09:30:00.588043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.588098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.588115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.588122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.588128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.588145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 00:27:59.628 [2024-11-19 09:30:00.598023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.598080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.598097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.598105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.598112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.598128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 
00:27:59.628 [2024-11-19 09:30:00.608147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.608204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.608225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.608233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.608240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.608256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 00:27:59.628 [2024-11-19 09:30:00.618155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.618210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.618226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.628 [2024-11-19 09:30:00.618234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.628 [2024-11-19 09:30:00.618241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.628 [2024-11-19 09:30:00.618257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.628 qpair failed and we were unable to recover it. 00:27:59.628 [2024-11-19 09:30:00.628105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.628 [2024-11-19 09:30:00.628163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.628 [2024-11-19 09:30:00.628179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.629 [2024-11-19 09:30:00.628187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.629 [2024-11-19 09:30:00.628193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.629 [2024-11-19 09:30:00.628208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.629 qpair failed and we were unable to recover it. 
00:27:59.629 [2024-11-19 09:30:00.638131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.629 [2024-11-19 09:30:00.638191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.629 [2024-11-19 09:30:00.638208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.629 [2024-11-19 09:30:00.638215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.629 [2024-11-19 09:30:00.638221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.629 [2024-11-19 09:30:00.638238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.629 qpair failed and we were unable to recover it. 00:27:59.629 [2024-11-19 09:30:00.648285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.629 [2024-11-19 09:30:00.648344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.629 [2024-11-19 09:30:00.648360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.629 [2024-11-19 09:30:00.648368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.629 [2024-11-19 09:30:00.648377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.629 [2024-11-19 09:30:00.648393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.629 qpair failed and we were unable to recover it. 00:27:59.629 [2024-11-19 09:30:00.658283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.629 [2024-11-19 09:30:00.658337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.629 [2024-11-19 09:30:00.658353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.629 [2024-11-19 09:30:00.658361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.629 [2024-11-19 09:30:00.658368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.629 [2024-11-19 09:30:00.658384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.629 qpair failed and we were unable to recover it. 
00:27:59.629 [2024-11-19 09:30:00.668323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.629 [2024-11-19 09:30:00.668410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.629 [2024-11-19 09:30:00.668427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.629 [2024-11-19 09:30:00.668434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.629 [2024-11-19 09:30:00.668441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.629 [2024-11-19 09:30:00.668457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.629 qpair failed and we were unable to recover it. 00:27:59.629 [2024-11-19 09:30:00.678267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.629 [2024-11-19 09:30:00.678328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.629 [2024-11-19 09:30:00.678346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.629 [2024-11-19 09:30:00.678354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.629 [2024-11-19 09:30:00.678361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.629 [2024-11-19 09:30:00.678378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.629 qpair failed and we were unable to recover it. 00:27:59.889 [2024-11-19 09:30:00.688387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.889 [2024-11-19 09:30:00.688449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.889 [2024-11-19 09:30:00.688466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.889 [2024-11-19 09:30:00.688474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.889 [2024-11-19 09:30:00.688480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.889 [2024-11-19 09:30:00.688496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.889 qpair failed and we were unable to recover it. 
00:27:59.889 [2024-11-19 09:30:00.698334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.889 [2024-11-19 09:30:00.698428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.889 [2024-11-19 09:30:00.698444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.889 [2024-11-19 09:30:00.698451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.889 [2024-11-19 09:30:00.698458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.889 [2024-11-19 09:30:00.698473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-11-19 09:30:00.708448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.889 [2024-11-19 09:30:00.708507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.889 [2024-11-19 09:30:00.708524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.889 [2024-11-19 09:30:00.708532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.889 [2024-11-19 09:30:00.708538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.889 [2024-11-19 09:30:00.708555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-11-19 09:30:00.718371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.889 [2024-11-19 09:30:00.718426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.889 [2024-11-19 09:30:00.718443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.889 [2024-11-19 09:30:00.718450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.889 [2024-11-19 09:30:00.718456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.889 [2024-11-19 09:30:00.718472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.889 qpair failed and we were unable to recover it. 
00:27:59.889 [2024-11-19 09:30:00.728485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.889 [2024-11-19 09:30:00.728541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.889 [2024-11-19 09:30:00.728558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.889 [2024-11-19 09:30:00.728566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.889 [2024-11-19 09:30:00.728573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.889 [2024-11-19 09:30:00.728590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-11-19 09:30:00.738437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.889 [2024-11-19 09:30:00.738496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.889 [2024-11-19 09:30:00.738516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.889 [2024-11-19 09:30:00.738524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.889 [2024-11-19 09:30:00.738530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.889 [2024-11-19 09:30:00.738546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.889 qpair failed and we were unable to recover it. 00:27:59.889 [2024-11-19 09:30:00.748488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.889 [2024-11-19 09:30:00.748545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.889 [2024-11-19 09:30:00.748562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.889 [2024-11-19 09:30:00.748569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.889 [2024-11-19 09:30:00.748575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.889 [2024-11-19 09:30:00.748591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.889 qpair failed and we were unable to recover it. 
00:27:59.889 [2024-11-19 09:30:00.758551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.889 [2024-11-19 09:30:00.758610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.889 [2024-11-19 09:30:00.758627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.889 [2024-11-19 09:30:00.758635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.889 [2024-11-19 09:30:00.758642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.758658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-11-19 09:30:00.768519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.768575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.768591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.768599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.768605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.768621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-11-19 09:30:00.778619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.778677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.778696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.778707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.778718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.778735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 
00:27:59.890 [2024-11-19 09:30:00.788660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.788713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.788730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.788737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.788743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.788760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-11-19 09:30:00.798637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.798706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.798722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.798729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.798736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.798752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-11-19 09:30:00.808631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.808690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.808707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.808715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.808721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.808737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 
00:27:59.890 [2024-11-19 09:30:00.818725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.818779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.818796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.818804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.818810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.818826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-11-19 09:30:00.828671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.828727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.828744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.828753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.828759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.828775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-11-19 09:30:00.838803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.838886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.838903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.838911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.838917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.838932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 
00:27:59.890 [2024-11-19 09:30:00.848822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.848882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.848899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.848906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.848912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.848929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-11-19 09:30:00.858851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.858912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.858929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.858937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.858943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.858962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-11-19 09:30:00.868861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.868917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.868937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.868944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.868957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.868974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 
00:27:59.890 [2024-11-19 09:30:00.878906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.878967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.878984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.878992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.878998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.890 [2024-11-19 09:30:00.879014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.890 qpair failed and we were unable to recover it. 00:27:59.890 [2024-11-19 09:30:00.888938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.890 [2024-11-19 09:30:00.888999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.890 [2024-11-19 09:30:00.889016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.890 [2024-11-19 09:30:00.889024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.890 [2024-11-19 09:30:00.889030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.891 [2024-11-19 09:30:00.889046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-11-19 09:30:00.898964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.891 [2024-11-19 09:30:00.899039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.891 [2024-11-19 09:30:00.899056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.891 [2024-11-19 09:30:00.899063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.891 [2024-11-19 09:30:00.899070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.891 [2024-11-19 09:30:00.899085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.891 qpair failed and we were unable to recover it. 
00:27:59.891 [2024-11-19 09:30:00.908998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.891 [2024-11-19 09:30:00.909055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.891 [2024-11-19 09:30:00.909071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.891 [2024-11-19 09:30:00.909078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.891 [2024-11-19 09:30:00.909087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.891 [2024-11-19 09:30:00.909103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-11-19 09:30:00.919022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.891 [2024-11-19 09:30:00.919078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.891 [2024-11-19 09:30:00.919095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.891 [2024-11-19 09:30:00.919102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.891 [2024-11-19 09:30:00.919108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.891 [2024-11-19 09:30:00.919124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.891 qpair failed and we were unable to recover it. 00:27:59.891 [2024-11-19 09:30:00.929076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.891 [2024-11-19 09:30:00.929146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.891 [2024-11-19 09:30:00.929162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.891 [2024-11-19 09:30:00.929170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.891 [2024-11-19 09:30:00.929176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.891 [2024-11-19 09:30:00.929191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.891 qpair failed and we were unable to recover it. 
00:27:59.891 [2024-11-19 09:30:00.939023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:59.891 [2024-11-19 09:30:00.939087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:59.891 [2024-11-19 09:30:00.939102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:59.891 [2024-11-19 09:30:00.939110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:59.891 [2024-11-19 09:30:00.939116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:27:59.891 [2024-11-19 09:30:00.939131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:59.891 qpair failed and we were unable to recover it. 00:28:00.151 [2024-11-19 09:30:00.949080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:00.949134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:00.949149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:00.949155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:00.949162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:00.949176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 00:28:00.151 [2024-11-19 09:30:00.959149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:00.959204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:00.959218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:00.959225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:00.959232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:00.959247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 
00:28:00.151 [2024-11-19 09:30:00.969238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:00.969337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:00.969352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:00.969358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:00.969364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:00.969380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 00:28:00.151 [2024-11-19 09:30:00.979199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:00.979254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:00.979268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:00.979275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:00.979281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:00.979296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 00:28:00.151 [2024-11-19 09:30:00.989211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:00.989294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:00.989309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:00.989316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:00.989322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:00.989336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 
00:28:00.151 [2024-11-19 09:30:00.999234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:00.999281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:00.999299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:00.999305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:00.999311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:00.999326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 00:28:00.151 [2024-11-19 09:30:01.009253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:01.009314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:01.009328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:01.009336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:01.009343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:01.009357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 00:28:00.151 [2024-11-19 09:30:01.019303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:01.019359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:01.019374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:01.019381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:01.019387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:01.019402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 
00:28:00.151 [2024-11-19 09:30:01.029323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:01.029377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:01.029392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:01.029399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:01.029405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:01.029420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 00:28:00.151 [2024-11-19 09:30:01.039360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:01.039415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:01.039429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:01.039436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:01.039446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:01.039461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 00:28:00.151 [2024-11-19 09:30:01.049397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:01.049475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:01.049489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:01.049496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:01.049502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.151 [2024-11-19 09:30:01.049517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.151 qpair failed and we were unable to recover it. 
00:28:00.151 [2024-11-19 09:30:01.059416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.151 [2024-11-19 09:30:01.059471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.151 [2024-11-19 09:30:01.059485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.151 [2024-11-19 09:30:01.059492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.151 [2024-11-19 09:30:01.059498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.059513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 00:28:00.152 [2024-11-19 09:30:01.069441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.069545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.069559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.069566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.069573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.069588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 00:28:00.152 [2024-11-19 09:30:01.079465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.079518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.079533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.079540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.079546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.079560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 
00:28:00.152 [2024-11-19 09:30:01.089521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.089585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.089600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.089607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.089613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.089628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 00:28:00.152 [2024-11-19 09:30:01.099462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.099515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.099530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.099537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.099543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.099557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 00:28:00.152 [2024-11-19 09:30:01.109555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.109611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.109626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.109633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.109640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.109655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 
00:28:00.152 [2024-11-19 09:30:01.119575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.119631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.119645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.119652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.119659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.119673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 00:28:00.152 [2024-11-19 09:30:01.129587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.129656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.129674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.129680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.129686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.129701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 00:28:00.152 [2024-11-19 09:30:01.139635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.139708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.139723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.139730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.139736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.139751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 
00:28:00.152 [2024-11-19 09:30:01.149668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.149720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.149734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.149741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.149747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.149762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 00:28:00.152 [2024-11-19 09:30:01.159738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.159822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.159836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.159843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.159849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.159863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 00:28:00.152 [2024-11-19 09:30:01.169655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.169712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.169727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.169734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.169743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.169758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 
00:28:00.152 [2024-11-19 09:30:01.179742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.179799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.179814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.179821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.152 [2024-11-19 09:30:01.179827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.152 [2024-11-19 09:30:01.179841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.152 qpair failed and we were unable to recover it. 00:28:00.152 [2024-11-19 09:30:01.189770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.152 [2024-11-19 09:30:01.189822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.152 [2024-11-19 09:30:01.189837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.152 [2024-11-19 09:30:01.189845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.153 [2024-11-19 09:30:01.189851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.153 [2024-11-19 09:30:01.189867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.153 qpair failed and we were unable to recover it. 00:28:00.153 [2024-11-19 09:30:01.199799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.153 [2024-11-19 09:30:01.199850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.153 [2024-11-19 09:30:01.199864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.153 [2024-11-19 09:30:01.199872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.153 [2024-11-19 09:30:01.199877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.153 [2024-11-19 09:30:01.199892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.153 qpair failed and we were unable to recover it. 
00:28:00.412 [2024-11-19 09:30:01.209851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.209954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.209970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.209976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.209983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.209997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 00:28:00.412 [2024-11-19 09:30:01.219795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.219856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.219871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.219878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.219883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.219898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 00:28:00.412 [2024-11-19 09:30:01.229854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.229904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.229919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.229926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.229932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.229946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 
00:28:00.412 [2024-11-19 09:30:01.239901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.239962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.239976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.239983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.239989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.240004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 00:28:00.412 [2024-11-19 09:30:01.249938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.249998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.250013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.250020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.250026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.250041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 00:28:00.412 [2024-11-19 09:30:01.259964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.260018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.260036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.260043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.260049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.260064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 
00:28:00.412 [2024-11-19 09:30:01.269988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.270044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.270067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.270074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.270080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.270098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 00:28:00.412 [2024-11-19 09:30:01.280025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.280078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.280093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.280099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.280105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.280120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 00:28:00.412 [2024-11-19 09:30:01.290056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.290115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.290130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.290138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.290144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.290159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 
00:28:00.412 [2024-11-19 09:30:01.300018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.300114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.300128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.300135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.300145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.300160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 00:28:00.412 [2024-11-19 09:30:01.310124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.310179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.310194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.310200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.310207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.310221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 00:28:00.412 [2024-11-19 09:30:01.320140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.320192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.320207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.320214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.320220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.412 [2024-11-19 09:30:01.320234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.412 qpair failed and we were unable to recover it. 
00:28:00.412 [2024-11-19 09:30:01.330181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.412 [2024-11-19 09:30:01.330237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.412 [2024-11-19 09:30:01.330251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.412 [2024-11-19 09:30:01.330257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.412 [2024-11-19 09:30:01.330264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.330279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 00:28:00.413 [2024-11-19 09:30:01.340206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.340256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.340270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.340277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.340284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.340298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 00:28:00.413 [2024-11-19 09:30:01.350227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.350283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.350298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.350304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.350310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.350325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 
00:28:00.413 [2024-11-19 09:30:01.360261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.360314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.360329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.360336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.360342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.360357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 00:28:00.413 [2024-11-19 09:30:01.370225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.370279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.370294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.370300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.370309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.370323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 00:28:00.413 [2024-11-19 09:30:01.380284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.380375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.380391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.380398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.380405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.380420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 
00:28:00.413 [2024-11-19 09:30:01.390355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.390409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.390428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.390435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.390441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.390456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 00:28:00.413 [2024-11-19 09:30:01.400373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.400423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.400437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.400445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.400451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.400465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 00:28:00.413 [2024-11-19 09:30:01.410423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.410479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.410494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.410500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.410507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.410521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 
00:28:00.413 [2024-11-19 09:30:01.420461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.420529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.420542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.420549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.420555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.420569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 00:28:00.413 [2024-11-19 09:30:01.430507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.430558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.430572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.430578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.430588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.430602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 00:28:00.413 [2024-11-19 09:30:01.440487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.440582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.440597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.440603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.440609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.440624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 
00:28:00.413 [2024-11-19 09:30:01.450484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.450539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.450553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.450560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.450566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.413 [2024-11-19 09:30:01.450580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.413 qpair failed and we were unable to recover it. 00:28:00.413 [2024-11-19 09:30:01.460492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.413 [2024-11-19 09:30:01.460551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.413 [2024-11-19 09:30:01.460566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.413 [2024-11-19 09:30:01.460573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.413 [2024-11-19 09:30:01.460579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.414 [2024-11-19 09:30:01.460594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.414 qpair failed and we were unable to recover it. 00:28:00.673 [2024-11-19 09:30:01.470638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.673 [2024-11-19 09:30:01.470693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.673 [2024-11-19 09:30:01.470707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.673 [2024-11-19 09:30:01.470714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.673 [2024-11-19 09:30:01.470721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.673 [2024-11-19 09:30:01.470735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.673 qpair failed and we were unable to recover it. 
00:28:00.673 [2024-11-19 09:30:01.480695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.673 [2024-11-19 09:30:01.480802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.673 [2024-11-19 09:30:01.480817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.673 [2024-11-19 09:30:01.480824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.673 [2024-11-19 09:30:01.480831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.673 [2024-11-19 09:30:01.480845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.673 qpair failed and we were unable to recover it. 00:28:00.673 [2024-11-19 09:30:01.490656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.673 [2024-11-19 09:30:01.490716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.673 [2024-11-19 09:30:01.490731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.673 [2024-11-19 09:30:01.490738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.673 [2024-11-19 09:30:01.490744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.673 [2024-11-19 09:30:01.490759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.673 qpair failed and we were unable to recover it. 00:28:00.673 [2024-11-19 09:30:01.500675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.673 [2024-11-19 09:30:01.500731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.673 [2024-11-19 09:30:01.500746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.673 [2024-11-19 09:30:01.500752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.673 [2024-11-19 09:30:01.500759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.673 [2024-11-19 09:30:01.500773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.673 qpair failed and we were unable to recover it. 
00:28:00.673 [2024-11-19 09:30:01.510710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.673 [2024-11-19 09:30:01.510759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.673 [2024-11-19 09:30:01.510774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.673 [2024-11-19 09:30:01.510781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.673 [2024-11-19 09:30:01.510787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.673 [2024-11-19 09:30:01.510802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.673 qpair failed and we were unable to recover it. 00:28:00.673 [2024-11-19 09:30:01.520695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.673 [2024-11-19 09:30:01.520761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.673 [2024-11-19 09:30:01.520779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.673 [2024-11-19 09:30:01.520786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.673 [2024-11-19 09:30:01.520792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.673 [2024-11-19 09:30:01.520807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.673 qpair failed and we were unable to recover it. 00:28:00.673 [2024-11-19 09:30:01.530765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.673 [2024-11-19 09:30:01.530825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.673 [2024-11-19 09:30:01.530842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.673 [2024-11-19 09:30:01.530849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.673 [2024-11-19 09:30:01.530855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.673 [2024-11-19 09:30:01.530872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.673 qpair failed and we were unable to recover it. 
00:28:00.673 [2024-11-19 09:30:01.540786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.673 [2024-11-19 09:30:01.540835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.673 [2024-11-19 09:30:01.540850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.673 [2024-11-19 09:30:01.540856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.673 [2024-11-19 09:30:01.540863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.673 [2024-11-19 09:30:01.540878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.673 qpair failed and we were unable to recover it. 00:28:00.673 [2024-11-19 09:30:01.550814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.673 [2024-11-19 09:30:01.550903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.673 [2024-11-19 09:30:01.550917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.673 [2024-11-19 09:30:01.550925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.673 [2024-11-19 09:30:01.550931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.673 [2024-11-19 09:30:01.550950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.673 qpair failed and we were unable to recover it. 00:28:00.674 [2024-11-19 09:30:01.560831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.560886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.560902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.560912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.560919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.560934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 
00:28:00.674 [2024-11-19 09:30:01.570893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.570961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.570976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.570983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.570988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.571003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 00:28:00.674 [2024-11-19 09:30:01.580914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.580975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.580991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.580998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.581004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.581019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 00:28:00.674 [2024-11-19 09:30:01.590971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.591074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.591090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.591097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.591103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.591119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 
00:28:00.674 [2024-11-19 09:30:01.600943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.601003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.601016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.601023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.601030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.601044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 00:28:00.674 [2024-11-19 09:30:01.610940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.611003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.611020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.611027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.611034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.611049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 00:28:00.674 [2024-11-19 09:30:01.621013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.621069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.621084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.621091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.621098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.621113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 
00:28:00.674 [2024-11-19 09:30:01.630965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.631017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.631031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.631038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.631044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.631059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 00:28:00.674 [2024-11-19 09:30:01.641102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.641163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.641177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.641184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.641190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.641205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 00:28:00.674 [2024-11-19 09:30:01.651176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.651247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.651265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.651273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.651278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.651294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 
00:28:00.674 [2024-11-19 09:30:01.661088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.661145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.661159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.661166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.661172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.661186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 00:28:00.674 [2024-11-19 09:30:01.671126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.671183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.671197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.671204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.671211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.671225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 00:28:00.674 [2024-11-19 09:30:01.681106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.674 [2024-11-19 09:30:01.681158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.674 [2024-11-19 09:30:01.681173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.674 [2024-11-19 09:30:01.681180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.674 [2024-11-19 09:30:01.681186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.674 [2024-11-19 09:30:01.681200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.674 qpair failed and we were unable to recover it. 
00:28:00.675 [2024-11-19 09:30:01.691199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.675 [2024-11-19 09:30:01.691255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.675 [2024-11-19 09:30:01.691270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.675 [2024-11-19 09:30:01.691281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.675 [2024-11-19 09:30:01.691287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.675 [2024-11-19 09:30:01.691303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.675 qpair failed and we were unable to recover it. 00:28:00.675 [2024-11-19 09:30:01.701255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.675 [2024-11-19 09:30:01.701314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.675 [2024-11-19 09:30:01.701329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.675 [2024-11-19 09:30:01.701336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.675 [2024-11-19 09:30:01.701342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.675 [2024-11-19 09:30:01.701357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.675 qpair failed and we were unable to recover it. 00:28:00.675 [2024-11-19 09:30:01.711283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.675 [2024-11-19 09:30:01.711341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.675 [2024-11-19 09:30:01.711357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.675 [2024-11-19 09:30:01.711364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.675 [2024-11-19 09:30:01.711370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.675 [2024-11-19 09:30:01.711384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.675 qpair failed and we were unable to recover it. 
00:28:00.675 [2024-11-19 09:30:01.721290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.675 [2024-11-19 09:30:01.721342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.675 [2024-11-19 09:30:01.721356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.675 [2024-11-19 09:30:01.721364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.675 [2024-11-19 09:30:01.721370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.675 [2024-11-19 09:30:01.721384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.675 qpair failed and we were unable to recover it. 00:28:00.934 [2024-11-19 09:30:01.731349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.934 [2024-11-19 09:30:01.731408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.934 [2024-11-19 09:30:01.731422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.934 [2024-11-19 09:30:01.731430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.934 [2024-11-19 09:30:01.731436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.934 [2024-11-19 09:30:01.731451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.934 qpair failed and we were unable to recover it. 00:28:00.934 [2024-11-19 09:30:01.741310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.934 [2024-11-19 09:30:01.741372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.934 [2024-11-19 09:30:01.741387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.934 [2024-11-19 09:30:01.741394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.934 [2024-11-19 09:30:01.741400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.934 [2024-11-19 09:30:01.741415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.934 qpair failed and we were unable to recover it. 
00:28:00.934 [2024-11-19 09:30:01.751396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.934 [2024-11-19 09:30:01.751449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.934 [2024-11-19 09:30:01.751463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.934 [2024-11-19 09:30:01.751470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.934 [2024-11-19 09:30:01.751476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.934 [2024-11-19 09:30:01.751491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.934 qpair failed and we were unable to recover it. 00:28:00.935 [2024-11-19 09:30:01.761354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.935 [2024-11-19 09:30:01.761408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.935 [2024-11-19 09:30:01.761423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.935 [2024-11-19 09:30:01.761430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.935 [2024-11-19 09:30:01.761436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.935 [2024-11-19 09:30:01.761451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.935 qpair failed and we were unable to recover it. 00:28:00.935 [2024-11-19 09:30:01.771454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.935 [2024-11-19 09:30:01.771515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.935 [2024-11-19 09:30:01.771530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.935 [2024-11-19 09:30:01.771537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.935 [2024-11-19 09:30:01.771543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22f6ba0 00:28:00.935 [2024-11-19 09:30:01.771558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:00.935 qpair failed and we were unable to recover it. 
00:28:00.935 [2024-11-19 09:30:01.781485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.935 [2024-11-19 09:30:01.781584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.935 [2024-11-19 09:30:01.781652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.935 [2024-11-19 09:30:01.781678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.935 [2024-11-19 09:30:01.781700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae9c000b90 00:28:00.935 [2024-11-19 09:30:01.781756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.935 qpair failed and we were unable to recover it. 00:28:00.935 [2024-11-19 09:30:01.791521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:00.935 [2024-11-19 09:30:01.791635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:00.935 [2024-11-19 09:30:01.791664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:00.935 [2024-11-19 09:30:01.791679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:00.935 [2024-11-19 09:30:01.791693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae9c000b90 00:28:00.935 [2024-11-19 09:30:01.791724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:00.935 qpair failed and we were unable to recover it. 00:28:00.935 [2024-11-19 09:30:01.791825] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:00.935 A controller has encountered a failure and is being reset. 00:28:00.935 [2024-11-19 09:30:01.791932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2304af0 (9): Bad file descriptor 00:28:00.935 Controller properly reset. 00:28:00.935 Initializing NVMe Controllers 00:28:00.935 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:00.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:00.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:00.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:00.935 Initialization complete. Launching workers. 
00:28:00.935 Starting thread on core 1 00:28:00.935 Starting thread on core 2 00:28:00.935 Starting thread on core 3 00:28:00.935 Starting thread on core 0 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:00.935 00:28:00.935 real 0m10.799s 00:28:00.935 user 0m19.577s 00:28:00.935 sys 0m4.670s 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.935 ************************************ 00:28:00.935 END TEST nvmf_target_disconnect_tc2 00:28:00.935 ************************************ 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:00.935 09:30:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:00.935 rmmod nvme_tcp 00:28:00.935 rmmod nvme_fabrics 00:28:01.194 rmmod nvme_keyring 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1272932 ']' 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1272932 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1272932 ']' 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 1272932 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1272932 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1272932' 00:28:01.194 killing process with pid 1272932 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 1272932 00:28:01.194 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 1272932 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.452 09:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.358 09:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:03.358 00:28:03.358 real 0m19.588s 00:28:03.358 user 0m47.305s 00:28:03.358 sys 0m9.587s 00:28:03.358 09:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:03.358 09:30:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:03.358 ************************************ 00:28:03.358 END TEST nvmf_target_disconnect 00:28:03.358 ************************************ 00:28:03.358 09:30:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:03.358 00:28:03.358 real 5m51.998s 00:28:03.358 user 10m33.949s 00:28:03.358 sys 1m58.199s 00:28:03.358 09:30:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:03.358 09:30:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.358 ************************************ 00:28:03.358 END TEST nvmf_host 00:28:03.358 ************************************ 00:28:03.617 09:30:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:03.617 09:30:04 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:03.617 09:30:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:03.617 09:30:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:03.617 09:30:04 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:03.617 09:30:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:03.617 ************************************ 00:28:03.617 START TEST nvmf_target_core_interrupt_mode 00:28:03.617 ************************************ 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:03.617 * Looking for test storage... 00:28:03.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:03.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.617 --rc genhtml_branch_coverage=1 00:28:03.617 --rc genhtml_function_coverage=1 00:28:03.617 --rc genhtml_legend=1 00:28:03.617 --rc geninfo_all_blocks=1 00:28:03.617 --rc geninfo_unexecuted_blocks=1 00:28:03.617 00:28:03.617 ' 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:03.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.617 --rc genhtml_branch_coverage=1 00:28:03.617 --rc genhtml_function_coverage=1 00:28:03.617 --rc genhtml_legend=1 00:28:03.617 --rc geninfo_all_blocks=1 00:28:03.617 --rc geninfo_unexecuted_blocks=1 00:28:03.617 00:28:03.617 ' 00:28:03.617 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:03.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.617 --rc genhtml_branch_coverage=1 00:28:03.617 --rc genhtml_function_coverage=1 00:28:03.617 --rc genhtml_legend=1 00:28:03.618 --rc geninfo_all_blocks=1 00:28:03.618 --rc geninfo_unexecuted_blocks=1 00:28:03.618 00:28:03.618 ' 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:03.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.618 --rc genhtml_branch_coverage=1 00:28:03.618 --rc genhtml_function_coverage=1 00:28:03.618 --rc genhtml_legend=1 00:28:03.618 --rc geninfo_all_blocks=1 00:28:03.618 --rc geninfo_unexecuted_blocks=1 00:28:03.618 00:28:03.618 ' 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:03.618 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:03.877 ************************************ 00:28:03.877 START TEST nvmf_abort 00:28:03.877 ************************************ 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:03.877 * Looking for test storage... 00:28:03.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:03.877 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.878 --rc genhtml_branch_coverage=1 00:28:03.878 --rc genhtml_function_coverage=1 00:28:03.878 --rc genhtml_legend=1 00:28:03.878 --rc geninfo_all_blocks=1 00:28:03.878 --rc geninfo_unexecuted_blocks=1 00:28:03.878 00:28:03.878 ' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.878 --rc genhtml_branch_coverage=1 00:28:03.878 --rc genhtml_function_coverage=1 00:28:03.878 --rc genhtml_legend=1 00:28:03.878 --rc geninfo_all_blocks=1 00:28:03.878 --rc geninfo_unexecuted_blocks=1 00:28:03.878 00:28:03.878 ' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.878 --rc genhtml_branch_coverage=1 00:28:03.878 --rc genhtml_function_coverage=1 00:28:03.878 --rc genhtml_legend=1 00:28:03.878 --rc geninfo_all_blocks=1 00:28:03.878 --rc geninfo_unexecuted_blocks=1 00:28:03.878 00:28:03.878 ' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.878 --rc genhtml_branch_coverage=1 00:28:03.878 --rc genhtml_function_coverage=1 00:28:03.878 --rc genhtml_legend=1 00:28:03.878 --rc geninfo_all_blocks=1 00:28:03.878 --rc geninfo_unexecuted_blocks=1 00:28:03.878 00:28:03.878 ' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.878 09:30:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.878 09:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:10.447 09:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:10.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
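The block above is the harness's NIC discovery: nvmf/common.sh builds arrays of known vendor:device IDs (e810, x722, mlx), selects the e810 set (the [[ e810 == e810 ]] match), then walks the matching PCI functions; the first hit is the Intel E810 port at 0000:86:00.0 (0x8086:0x159b), with its sibling port 0000:86:00.1 matched next. A rough standalone equivalent, assuming lspci is available, is:

    # hedged sketch: enumerate E810-class NICs the way the pci_devs loop
    # above does; 0x8086:0x159b is the device ID reported in the log
    lspci -D -d 8086:159b
    # map each PCI function to its kernel net device, mirroring the
    # "/sys/bus/pci/devices/$pci/net/"* glob used by the harness
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/$pci/net/"
    done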
00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:10.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:10.447 Found net devices under 0000:86:00.0: cvl_0_0 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:10.447 Found net devices under 0000:86:00.1: cvl_0_1 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.447 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:10.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:28:10.448 00:28:10.448 --- 10.0.0.2 ping statistics --- 00:28:10.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.448 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:10.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:28:10.448 00:28:10.448 --- 10.0.0.1 ping statistics --- 00:28:10.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.448 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1278046 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1278046 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1278046 ']' 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 [2024-11-19 09:30:10.778787] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:10.448 [2024-11-19 09:30:10.779739] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:28:10.448 [2024-11-19 09:30:10.779773] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.448 [2024-11-19 09:30:10.860131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:10.448 [2024-11-19 09:30:10.902333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.448 [2024-11-19 09:30:10.902371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.448 [2024-11-19 09:30:10.902378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:10.448 [2024-11-19 09:30:10.902385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:10.448 [2024-11-19 09:30:10.902390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:10.448 [2024-11-19 09:30:10.903931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.448 [2024-11-19 09:30:10.903845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.448 [2024-11-19 09:30:10.903931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.448 [2024-11-19 09:30:10.970956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:10.448 [2024-11-19 09:30:10.971852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:10.448 [2024-11-19 09:30:10.972163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:10.448 [2024-11-19 09:30:10.972282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:10.448 09:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 [2024-11-19 09:30:11.040641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 Malloc0 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 Delay0 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
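The rpc_cmd calls above provision the device under test on the freshly created TCP transport: a 64 MiB RAM disk with 4 KiB blocks (Malloc0), and on top of it a delay bdev (Delay0) whose -r/-t/-w/-n arguments set average and p99 read and write latencies to 1,000,000 us, i.e. one full second, so that I/O stays in flight long enough for aborts to catch it. The subsystem nqn.2016-06.io.spdk:cnode0 then exports Delay0, not the raw malloc disk, as namespace 1. rpc_cmd is the harness's wrapper; assuming it forwards to plain scripts/rpc.py, the same provisioning is roughly:

    # hedged sketch of the provisioning RPCs above; flags copied verbatim
    # from the log
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0  # 64 MiB, 4 KiB blocks
    # one second of injected latency keeps I/O abortable while in flight
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0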
00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 [2024-11-19 09:30:11.128586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.448 09:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:10.448 [2024-11-19 09:30:11.212259] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:12.349 Initializing NVMe Controllers 00:28:12.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:12.349 controller IO queue size 128 less than required 00:28:12.349 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:12.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:12.349 Initialization complete. Launching workers. 
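With the data and discovery listeners up on 10.0.0.2:4420, the harness launches the abort example: one core (-c 0x1), a one-second run (-t 1), queue depth 128 (-q 128), log level warning. The "controller IO queue size 128 less than required" notice above is expected here: the requested queue depth saturates the controller's I/O queue, so surplus requests queue inside the NVMe driver, which is the kind of backlog an abort test wants to create. The discovery-referral warning is likewise benign; the example simply skips referral entries it does not support. The same workload can be replayed by hand, with paths relative to an SPDK build tree:

    # hedged sketch: re-running the abort workload against the listener
    # created above; -c 0x1 = core 0 only, -t 1 = one second,
    # -q 128 = queue depth, -l warning = log level
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -q 128 -l warning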
00:28:12.349 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37546 00:28:12.349 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37607, failed to submit 66 00:28:12.349 success 37546, unsuccessful 61, failed 0 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.349 rmmod nvme_tcp 00:28:12.349 rmmod nvme_fabrics 00:28:12.349 rmmod nvme_keyring 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1278046 ']' 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1278046 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1278046 ']' 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1278046 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:12.349 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1278046 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1278046' 00:28:12.608 killing process with pid 1278046 
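The abort ledger above balances: 127 I/Os completed plus 37,546 aborted (reported as "failed" from the I/O path's point of view) gives 37,673 total requests; on the abort side, 37,607 submitted plus 66 that could not be submitted matches the same 37,673, and of the submitted aborts, 37,546 succeeded and 61 did not (37,546 + 61 = 37,607). Every request is accounted for, which is the consistency this test exercises; the teardown that follows deletes the subsystem and unwinds the fixture.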
00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1278046 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1278046 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.608 09:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:15.219 00:28:15.219 real 0m10.970s 00:28:15.219 user 0m10.242s 00:28:15.219 sys 0m5.511s 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:15.219 ************************************ 00:28:15.219 END TEST nvmf_abort 00:28:15.219 ************************************ 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:15.219 ************************************ 00:28:15.219 START TEST nvmf_ns_hotplug_stress 00:28:15.219 ************************************ 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:15.219 * Looking for test storage... 
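One note on the nvmf_abort teardown that just ran, before the hotplug-stress prologue continues: nvmftestfini syncs, unloads nvme-tcp, nvme-fabrics, and nvme-keyring on the initiator side (the rmmod lines above), kills the target after checking via ps that PID 1278046 still names an SPDK reactor process rather than sudo, and removes the firewall rule by round-tripping the ruleset through iptables-save, dropping anything tagged SPDK_NVMF, and restoring the rest; this is why the ACCEPT rule was inserted with an SPDK_NVMF comment in the first place. A sketch of that cleanup pattern, with the namespace-deletion step an assumption about what _remove_spdk_ns does:

    # hedged sketch: rules tagged SPDK_NVMF are dropped wholesale by
    # filtering the saved ruleset and restoring what remains
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1          # the log's final flush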
00:28:15.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:15.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.219 --rc genhtml_branch_coverage=1 00:28:15.219 --rc genhtml_function_coverage=1 00:28:15.219 --rc genhtml_legend=1 00:28:15.219 --rc geninfo_all_blocks=1 00:28:15.219 --rc geninfo_unexecuted_blocks=1 00:28:15.219 00:28:15.219 ' 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:15.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.219 --rc genhtml_branch_coverage=1 00:28:15.219 --rc genhtml_function_coverage=1 00:28:15.219 --rc genhtml_legend=1 00:28:15.219 --rc geninfo_all_blocks=1 00:28:15.219 --rc geninfo_unexecuted_blocks=1 00:28:15.219 00:28:15.219 ' 00:28:15.219 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:15.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.219 --rc genhtml_branch_coverage=1 00:28:15.219 --rc genhtml_function_coverage=1 00:28:15.220 --rc genhtml_legend=1 00:28:15.220 --rc geninfo_all_blocks=1 00:28:15.220 --rc geninfo_unexecuted_blocks=1 00:28:15.220 00:28:15.220 ' 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:15.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:15.220 --rc genhtml_branch_coverage=1 00:28:15.220 --rc genhtml_function_coverage=1 
00:28:15.220 --rc genhtml_legend=1 00:28:15.220 --rc geninfo_all_blocks=1 00:28:15.220 --rc geninfo_unexecuted_blocks=1 00:28:15.220 00:28:15.220 ' 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
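The NVME_HOSTNQN/NVME_HOSTID pair traced above is derived with nvme-cli; a sketch of the same derivation (the uuid is machine-specific, and the exact parameter expansion here is an assumption, not a quote of common.sh):

  # Sketch: build the host identity the way the trace above shows.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the uuid part (assumed form)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")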
00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:15.220 09:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.789 09:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.789 09:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:21.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.789 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:21.790 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.790 
09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:21.790 Found net devices under 0000:86:00.0: cvl_0_0 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:21.790 Found net devices under 0000:86:00.1: cvl_0_1 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.790 09:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:21.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:28:21.790 00:28:21.790 --- 10.0.0.2 ping statistics --- 00:28:21.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.790 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:28:21.790 00:28:21.790 --- 10.0.0.1 ping statistics --- 00:28:21.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.790 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1281976 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1281976 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1281976 ']' 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
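The nvmfappstart call traced at nvmf/common.sh@508/@509 reduces to launching the target inside the test namespace; a sketch using the exact arguments from the trace:

  # Sketch: interrupt-mode target on cores 1-3 (mask 0xE), shm id 0, all
  # tracepoint groups (0xFFFF), run inside the cvl_0_0_ns_spdk namespace.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!   # waitforlisten then polls /var/tmp/spdk.sock for this pid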
00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:21.790 09:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:21.790 [2024-11-19 09:30:21.921212] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:21.790 [2024-11-19 09:30:21.922145] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:28:21.791 [2024-11-19 09:30:21.922179] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.791 [2024-11-19 09:30:22.001846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:21.791 [2024-11-19 09:30:22.042216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.791 [2024-11-19 09:30:22.042250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.791 [2024-11-19 09:30:22.042258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.791 [2024-11-19 09:30:22.042264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.791 [2024-11-19 09:30:22.042271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.791 [2024-11-19 09:30:22.043722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.791 [2024-11-19 09:30:22.043825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.791 [2024-11-19 09:30:22.043818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.791 [2024-11-19 09:30:22.111490] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:21.791 [2024-11-19 09:30:22.112326] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:21.791 [2024-11-19 09:30:22.112585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:21.791 [2024-11-19 09:30:22.112723] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
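Once the reactors report interrupt mode, this can be cross-checked over RPC; a hedged sketch, assuming framework_get_reactors exposes a per-reactor in_interrupt flag as recent SPDK releases do:

  # Sketch: count reactors that actually entered interrupt mode (expect 3 here).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py framework_get_reactors | grep -c '"in_interrupt": true'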
00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:21.791 [2024-11-19 09:30:22.376819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.791 [2024-11-19 09:30:22.801292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.791 09:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:22.050 09:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:22.309 Malloc0 00:28:22.309 09:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:22.567 Delay0 00:28:22.567 09:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.825 09:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:22.825 NULL1 00:28:22.826 09:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
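Collected from the per-step traces above (ns_hotplug_stress.sh@27 through @36) and from the loop that produces the rest of this log (@44 through @50), the test reduces to the following shape; this is a condensed sketch of what the traces show, not the script verbatim:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$SPDK/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Stress phase: a 30s randread perf client (qd 128, 512B IOs) stays connected
  # while namespace 1 is hot-removed, Delay0 is re-added, and NULL1 grows by
  # one block per pass (null_size 1000 -> 1001 -> 1002 ... as logged below).
  $SPDK/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 $PERF_PID 2>/dev/null; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_resize NULL1 $((++null_size))
  done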
00:28:23.083 09:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1282442 00:28:23.083 09:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:23.083 09:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:23.083 09:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.456 Read completed with error (sct=0, sc=11) 00:28:24.457 09:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:24.457 09:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:24.457 09:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:24.714 true 00:28:24.714 09:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:24.714 09:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.647 09:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:25.647 09:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:25.647 09:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:25.905 true 00:28:25.905 09:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:25.905 09:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.163 09:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.420 09:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:26.420 09:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:26.420 true 00:28:26.420 09:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:26.420 09:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:27.791 09:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.791 09:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:27.791 09:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:27.791 true 00:28:28.049 09:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:28.049 09:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.049 09:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.307 09:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:28.307 09:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:28.563 true 00:28:28.563 09:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:28.563 09:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.755 09:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:29.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.755 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:28:29.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:29.755 09:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:29.755 09:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:30.013 true 00:28:30.013 09:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:30.013 09:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.945 09:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.203 09:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:31.203 09:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:31.203 true 00:28:31.203 09:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:31.203 09:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.461 09:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.718 09:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:31.719 09:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:31.976 true 00:28:31.976 09:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:31.976 09:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:32.909 09:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.167 09:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:33.167 09:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:33.167 true 00:28:33.425 09:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:33.425 09:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.425 09:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.683 09:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:33.683 09:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:33.941 true 00:28:33.941 09:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:33.941 09:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:34.874 09:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.874 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.132 09:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:35.132 09:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:35.389 true 00:28:35.389 09:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:35.389 09:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.323 09:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:36.323 09:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:36.323 09:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:36.580 true 00:28:36.581 09:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:36.581 09:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.838 09:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.096 09:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:37.096 09:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:37.096 true 00:28:37.353 09:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:37.353 09:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.286 09:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.544 09:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:38.544 09:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:38.544 true 00:28:38.544 09:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:38.544 09:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.801 09:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.058 09:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:39.058 09:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:39.316 true 00:28:39.316 09:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:39.316 09:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.250 09:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.508 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.508 09:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:40.508 09:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:40.766 true 00:28:40.766 09:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:40.766 09:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.700 09:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.700 09:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:41.700 09:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:41.958 true 00:28:41.958 09:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:41.958 09:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.215 09:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:42.473 09:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:42.473 09:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:42.473 true 00:28:42.731 09:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:42.731 09:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.664 09:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.922 09:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:43.922 09:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:43.922 true 00:28:43.922 09:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:43.922 09:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.180 09:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.437 09:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:44.437 09:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:44.695 true 00:28:44.695 09:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:44.695 09:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.628 09:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:45.888 09:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:45.888 09:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:46.146 true 00:28:46.146 09:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:46.146 09:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.079 09:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.079 09:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:47.079 09:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:47.337 true 00:28:47.337 09:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:47.337 09:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.595 09:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.852 09:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:47.852 09:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:47.852 true 00:28:48.110 09:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:48.110 09:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.041 09:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:49.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:49.299 09:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:49.299 09:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:49.299 true 00:28:49.299 09:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:49.299 09:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.557 09:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:49.815 09:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:49.815 09:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:50.073 true 00:28:50.073 09:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:50.073 09:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.447 09:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.447 09:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:51.447 09:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:51.704 true 00:28:51.704 09:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:51.704 09:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.638 09:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.638 09:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:52.638 09:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:52.895 true 00:28:52.895 09:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442 00:28:52.895 09:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:53.153 09:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:53.153 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:53.153 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:53.411 Initializing NVMe Controllers
00:28:53.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:53.411 Controller IO queue size 128, less than required.
00:28:53.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:53.411 Controller IO queue size 128, less than required.
00:28:53.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:53.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:53.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:53.411 Initialization complete. Launching workers.
00:28:53.411 ========================================================
00:28:53.411                                                                          Latency(us)
00:28:53.411 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:28:53.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1620.04       0.79   51004.35    2910.78 1064612.85
00:28:53.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16700.81       8.15    7663.76    1252.45  380583.53
00:28:53.411 ========================================================
00:28:53.411 Total                                                                  :   18320.85       8.95   11496.20    1252.45 1064612.85
00:28:53.411
00:28:53.411 true
00:28:53.411 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1282442
00:28:53.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1282442) - No such process
00:28:53.411 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1282442
00:28:53.411 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:53.670 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:53.929 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:53.929 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:53.929 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:53.929 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
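The latency summary above is internally consistent: 1620.04 + 16700.81 = 18320.85 IOPS, and the Total average is the IOPS-weighted mean, (1620.04 * 51004.35 + 16700.81 * 7663.76) / 18320.85 ≈ 11496.20 us. The much higher NSID 1 average is consistent with that namespace being the Delay0-backed one that was hot-removed and re-added throughout the run. The sh@44-sh@50 trace that ends here is the main stress loop; a minimal bash sketch of it, reconstructed from the trace alone (the PERF_PID variable name is illustrative, and rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the log; null_size had already reached 1012 by the top of this excerpt):

    while kill -0 "$PERF_PID"; do                      # sh@44: loop while the perf process (1282442 here) is alive
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove namespace 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-add it backed by the Delay0 bdev
        null_size=$((null_size + 1))                   # sh@49: 1012, 1013, ... 1028 in this run
        rpc.py bdev_null_resize NULL1 "$null_size"     # sh@50: grow NULL1 while I/O is in flight
    done

The "No such process" message from kill is the expected exit condition rather than a failure: perf finished, printed its latency summary, and the loop stopped.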
00:28:53.929 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:53.929 null0
00:28:53.929 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:53.929 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:53.929 09:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:28:54.189 null1
00:28:54.189 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:54.189 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:54.189 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:28:54.447 null2
00:28:54.447 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:54.447 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:54.447 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:28:54.447 null3
00:28:54.447 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:54.447 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:54.447 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:28:54.705 null4
00:28:54.705 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:54.705 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:54.705 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:28:54.963 null5
00:28:54.963 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:54.963 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:54.963 09:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:28:55.222 null6
00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:55.222 09:30:56
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:55.222 null7 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:55.222 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
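By this point the sh@59-sh@60 loop has created null0 through null7, one backing bdev per worker, and the first add_remove launches have begun. A sketch of that creation loop as it appears in the trace (rpc.py again stands for the full scripts/rpc.py path; 100 and 4096 are the size in MB and the block size in bytes, taken straight from the traced calls):

    nthreads=8
    for ((i = 0; i < nthreads; i++)); do           # sh@59
        rpc.py bdev_null_create "null$i" 100 4096  # sh@60: 100 MB null bdev, 4096-byte blocks
    done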
00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
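The sh@62-sh@64 entries interleaved through these lines start the eight add_remove workers in the background and record their PIDs, so that sh@66 (the "wait 1287570 1287571 ..." visible just below) can block until every worker finishes. A sketch of the fan-out, inferred from the trace:

    pids=()
    for ((i = 0; i < nthreads; i++)); do   # sh@62
        add_remove $((i + 1)) "null$i" &   # sh@63: nsid i+1 backed by bdev null$i
        pids+=($!)                         # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                      # sh@66: wait 1287570 1287571 ... in this log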
00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
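Each worker's body is what produces the dense sh@16-sh@18 interleaving from here on: ten add/remove cycles of a single namespace, with all eight workers racing against cnode1 at once, which is why the adds and removes for nsids 1 through 8 appear shuffled together below. A sketch of the worker, reconstructed from the sh@14-sh@18 trace entries (not copied from the SPDK source):

    add_remove() {
        local nsid=$1 bdev=$2              # sh@14: e.g. add_remove 1 null0
        for ((i = 0; i < 10; i++)); do     # sh@16: ten hotplug cycles per worker
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }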
00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1287570 1287571 1287573 1287576 1287577 1287579 1287581 1287583 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:55.481 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:55.482 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:55.482 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:55.739 09:30:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:55.739 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.739 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:55.739 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:55.739 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.739 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:55.740 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:55.998 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:55.998 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:55.998 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:55.998 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:55.998 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:55.998 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:55.998 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:55.998 09:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:56.256 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.256 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.256 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:56.256 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:56.257 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:56.515 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:56.515 09:30:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:56.515 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:56.515 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:56.515 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:56.515 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:56.515 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.516 09:30:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.516 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:56.774 
09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:56.774 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.033 09:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:57.292 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:57.292 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:57.292 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:57.292 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:57.292 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.292 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:57.292 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:57.292 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:57.550 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.809 09:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:57.809 09:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.067 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.067 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.067 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.068 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.068 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.068 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.068 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.068 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:58.331 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.331 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.332 09:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.332 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.591 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.591 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.591 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.591 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.591 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.592 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:58.592 
09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:58.592 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.592 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.592 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.592 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:58.850 09:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.109 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:59.367 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:59.367 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.367 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:59.367 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:59.367 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:59.367 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:59.367 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:59.367 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.626 09:31:00 
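The hot-plug stress traced here is a fixed ten-iteration cycle: the @16 tags drive the counter, @17 attaches eight null bdevs as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1, and @18 detaches them again. A minimal sketch of what lines 16-18 of target/ns_hotplug_stress.sh appear to execute; the rpc_py variable name and the backgrounding of each RPC are assumptions inferred from the interleaved, out-of-order nsid ordering in the records above, not confirmed by them:

  #!/usr/bin/env bash
  # Hedged reconstruction of the traced add/remove cycle.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  for ((i = 0; i < 10; i++)); do            # @16: (( ++i )) / (( i < 10 ))
      for n in {0..7}; do                   # @17: null0..null7 -> nsid 1..8
          "$rpc_py" nvmf_subsystem_add_ns -n $((n + 1)) "$nqn" "null$n" &
      done
      wait
      for n in {0..7}; do                   # @18: detach, completion order varies
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" $((n + 1)) &
      done
      wait
  done

Each namespace is backed by a null bdev, so the cycle exercises only the subsystem's attach/detach paths while connected initiators observe namespaces appearing and disappearing.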
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.626 rmmod nvme_tcp 00:28:59.626 rmmod nvme_fabrics 00:28:59.626 rmmod nvme_keyring 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1281976 ']' 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1281976 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1281976 ']' 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1281976 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1281976 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1281976' 00:28:59.626 killing process with pid 1281976 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1281976 00:28:59.626 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1281976 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.885 09:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.791 09:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.050 00:29:02.050 real 0m47.107s 00:29:02.050 user 2m56.394s 00:29:02.050 sys 0m19.570s 00:29:02.050 09:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:02.050 09:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:02.050 ************************************ 00:29:02.050 END TEST nvmf_ns_hotplug_stress 00:29:02.050 ************************************ 00:29:02.050 09:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:02.050 09:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:02.050 09:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:02.050 09:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:02.050 ************************************ 00:29:02.050 START TEST nvmf_delete_subsystem 00:29:02.050 
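With the stress loop done, @68 clears the EXIT trap and @70 runs nvmftestfini: unload the kernel NVMe/TCP modules with a bounded retry, kill the target process by pid, and strip only SPDK's own firewall rules. A condensed sketch of the three traced helpers, assuming function shapes that the trace only shows from the inside; the retry bound of 20, the sudo guard, and the SPDK_NVMF comment tag are taken verbatim from the records above:

  nvmfcleanup() {               # nvmf/common.sh@121..@129 in the trace
      sync
      set +e                    # module unload may need several attempts
      for i in {1..20}; do
          modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      done
      set -e
  }

  killprocess() {               # common/autotest_common.sh@952..@976
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0            # already gone
      [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1  # never kill sudo itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"               # reap the child and propagate its exit status
  }

  iptr() {                      # nvmf/common.sh@791: drop only SPDK's rules
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }

Because every rule SPDK inserts carries an '-m comment --comment SPDK_NVMF:...' tag, the grep -v pipeline removes exactly those rules and leaves the host's other firewall state untouched.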
************************************ 00:29:02.050 09:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:02.050 * Looking for test storage... 00:29:02.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.050 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.051 --rc genhtml_branch_coverage=1 00:29:02.051 --rc genhtml_function_coverage=1 00:29:02.051 --rc genhtml_legend=1 00:29:02.051 --rc geninfo_all_blocks=1 00:29:02.051 --rc geninfo_unexecuted_blocks=1 00:29:02.051 00:29:02.051 ' 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.051 --rc genhtml_branch_coverage=1 00:29:02.051 --rc genhtml_function_coverage=1 00:29:02.051 --rc genhtml_legend=1 00:29:02.051 --rc geninfo_all_blocks=1 00:29:02.051 --rc geninfo_unexecuted_blocks=1 00:29:02.051 00:29:02.051 ' 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.051 --rc genhtml_branch_coverage=1 00:29:02.051 --rc genhtml_function_coverage=1 00:29:02.051 --rc genhtml_legend=1 00:29:02.051 --rc geninfo_all_blocks=1 00:29:02.051 --rc geninfo_unexecuted_blocks=1 00:29:02.051 00:29:02.051 ' 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:02.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.051 --rc genhtml_branch_coverage=1 00:29:02.051 --rc genhtml_function_coverage=1 00:29:02.051 --rc 
genhtml_legend=1 00:29:02.051 --rc geninfo_all_blocks=1 00:29:02.051 --rc geninfo_unexecuted_blocks=1 00:29:02.051 00:29:02.051 ' 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.051 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.310 09:31:03 
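A few records above, lt 1.15 2 expands into cmp_versions 1.15 '<' 2 from scripts/common.sh: both version strings are split on '.', '-' and ':' (IFS=.-:), then compared component by component, with a missing component treated as 0. A condensed, self-contained sketch of that algorithm; it omits the traced decimal() helper, so non-numeric components are out of scope here:

  cmp_versions() {
      local IFS=.-:             # the split points used by the traced script
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < max; v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          if ((a > b)); then [[ $op == '>' ]]; return; fi
          if ((a < b)); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }

  lt 1.15 2 && echo older       # decided by 1 < 2 in the first component, as traced

The trace shows exactly this path: ver1_l=2, ver2_l=1, then the first-component compare returns before the second component of 1.15 is ever consulted.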
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
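The PATH exported here shows the same three toolchain prefixes (golangci, protoc, go) repeated many times: paths/export.sh@2..@4 prepend them unconditionally, and the file has evidently been sourced once per nested test script. A small guarded-prepend sketch; the guard is my addition, not something the traced export.sh does:

  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;              # already present, keep PATH unchanged
          *) PATH=$1:$PATH ;;
      esac
  }
  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/go/1.21.1/bin
  export PATH

Sourcing this version any number of times leaves each directory in PATH exactly once, which keeps the exported value, and every process that inherits it, short.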
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.310 09:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:08.885 09:31:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:08.885 09:31:08 
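The arrays built here (e810, x722, mlx) are allowlists of PCI vendor:device IDs resolved through a pci_bus_cache map that the trace does not show; with SPDK_TEST_NVMF_NICS=e810 the e810 list becomes the candidate device set. A standalone sysfs equivalent of that lookup, with the cache replaced by a direct scan (the scan itself is an assumed implementation; the IDs 0x1592 and 0x159b are the traced Intel E810 entries):

  intel=0x8086
  declare -a e810
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor")
      device=$(<"$dev/device")
      if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
          e810+=("${dev##*/}")
          echo "Found ${dev##*/} ($vendor - $device)"
      fi
  done

On this host such a scan would report the two traced ports, 0000:86:00.0 and 0000:86:00.1, whose netdevs are then located under /sys/bus/pci/devices/<bdf>/net/ exactly as the @411 records do.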
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:08.885 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:08.885 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.885 09:31:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:08.885 Found net devices under 0000:86:00.0: cvl_0_0 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.885 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:08.886 Found net devices under 0000:86:00.1: cvl_0_1 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:08.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:29:08.886 00:29:08.886 --- 10.0.0.2 ping statistics --- 00:29:08.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.886 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:08.886 00:29:08.886 --- 10.0.0.1 ping statistics --- 00:29:08.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.886 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:08.886 09:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1291933 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1291933 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1291933 ']' 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
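The nvmf_tcp_init sequence traced above turns the two detected E810 ports into a self-contained NVMe/TCP test topology: the target-side port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, so test traffic actually crosses the link between the two ports instead of short-circuiting through loopback. A minimal standalone sketch of the same setup, assuming the interface names and addresses of this run (run as root):

  ip netns add cvl_0_0_ns_spdk                     # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port on the initiator side, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1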
00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:08.886 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.886 [2024-11-19 09:31:09.092184] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:08.886 [2024-11-19 09:31:09.093227] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:29:08.886 [2024-11-19 09:31:09.093265] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.886 [2024-11-19 09:31:09.175277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:08.886 [2024-11-19 09:31:09.215247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.886 [2024-11-19 09:31:09.215284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.886 [2024-11-19 09:31:09.215293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.886 [2024-11-19 09:31:09.215299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.887 [2024-11-19 09:31:09.215304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.887 [2024-11-19 09:31:09.216546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.887 [2024-11-19 09:31:09.216548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.887 [2024-11-19 09:31:09.283899] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:08.887 [2024-11-19 09:31:09.284509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:08.887 [2024-11-19 09:31:09.284712] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
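The nvmfappstart call above launches nvmf_tgt inside that namespace with --interrupt-mode (hence the reactor and spdk_thread intr-mode notices) and then blocks in waitforlisten until the app's RPC socket answers. A hedged equivalent of that launch-and-wait, assuming this workspace's paths and SPDK's stock scripts/rpc.py (rpc_get_methods is a standard SPDK RPC):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # simplified waitforlisten: poll the UNIX-domain RPC socket until the target answers
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup"; exit 1; }
      sleep 0.1
  done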
00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.887 [2024-11-19 09:31:09.361443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.887 [2024-11-19 09:31:09.385677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.887 NULL1 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.887 09:31:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.887 Delay0 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1291959 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:08.887 09:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:08.887 [2024-11-19 09:31:09.492917] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
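The delete_subsystem.sh@15-@28 steps traced above assemble the whole target over RPC: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev whose ~1 s per-I/O latencies (the -r/-t/-w/-n values are in microseconds) guarantee that I/O from spdk_nvme_perf is still in flight when the subsystem is deleted. A sketch of the same sequence as plain rpc.py calls, assuming the socket path and workspace root of this run:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512        # 1000 MiB backing bdev, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0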
00:29:10.381 09:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:10.381 09:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.382 09:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:29:10.951 [several hundred repeated perf I/O completions elided: 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)', interleaved with 'starting I/O failed: -6' markers]
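Every one of these completions carries the same status, and it is worth decoding once: sct=0 is the generic status code type and sc=8 is, in NVMe terms, "command aborted due to SQ deletion", which is exactly what perf should see while nvmf_delete_subsystem tears down the subsystem's queue pairs underneath it (the 'starting I/O failed: -6' markers are the initiator's local submission failures, -6 being ENXIO, once its qpair is disabled). A hedged way to confirm the name against SPDK's own spec header in this workspace, assuming a checked-out tree:

  # find the generic status code 0x8 in SPDK's NVMe spec definitions
  grep -n "ABORTED_SQ_DELETION" \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/nvme_spec.h
  # expected to show something like: SPDK_NVME_SC_ABORTED_SQ_DELETION = 0x8,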
00:29:10.952 [remaining repeated 'Read/Write completed with error (sct=0, sc=8)' completions elided]
00:29:11.888 [2024-11-19 09:31:12.710797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19549a0 is same with the state(6) to be set 
00:29:11.888 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions elided]
00:29:11.888 [2024-11-19 09:31:12.734812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1953860 is same with the state(6) to be set 
00:29:11.888 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions elided]
00:29:11.888 [2024-11-19 09:31:12.735021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19532c0 is same with the state(6) to be set 
00:29:11.888 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions elided]
00:29:11.888 [2024-11-19 09:31:12.736249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9f4800d020 is same with the state(6) to be set 
00:29:11.888 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions elided]
00:29:11.888 [2024-11-19 09:31:12.736919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9f4800d680 is same with the state(6) to be set 
00:29:11.888 Initializing NVMe Controllers 00:29:11.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.888 Controller IO queue size 128, less than required. 00:29:11.888 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:11.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:11.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:11.889 Initialization complete. Launching workers. 
00:29:11.889 ======================================================== 00:29:11.889 Latency(us) 00:29:11.889 Device Information : IOPS MiB/s Average min max 00:29:11.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.17 0.09 897766.03 397.09 1007176.37 00:29:11.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.79 0.08 921093.97 241.95 1009893.00 00:29:11.889 ======================================================== 00:29:11.889 Total : 350.97 0.17 908520.05 241.95 1009893.00 00:29:11.889 00:29:11.889 [2024-11-19 09:31:12.737558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19549a0 (9): Bad file descriptor 00:29:11.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:11.889 09:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.889 09:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:11.889 09:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1291959 00:29:11.889 09:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1291959 00:29:12.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1291959) - No such process 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1291959 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1291959 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1291959 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.456 [2024-11-19 09:31:13.269679] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1292647 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1292647 00:29:12.456 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:12.456 [2024-11-19 09:31:13.353648] subsystem.c:1787:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
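What follows is the script's bounded liveness poll: having deleted the subsystem out from under the second perf run, it checks the perf pid with kill -0 every 0.5 s and only passes once the process has exited (the builtin's 'No such process' complaint further down is the expected success path), failing the test if perf is still alive after roughly twenty iterations. A stripped-down sketch of that wait loop, with $perf_pid standing in for the real pid (1292647 here):

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do      # still running?
      if (( delay++ > 20 )); then                # ~10 s budget at 0.5 s per poll
          echo "perf did not exit after subsystem delete"
          exit 1
      fi
      sleep 0.5
  done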
00:29:13.022 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:13.022 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1292647 00:29:13.022 09:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:13.281 09:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:13.281 09:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1292647 00:29:13.281 09:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:13.845 09:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:13.845 09:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1292647 00:29:13.845 09:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:14.410 09:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:14.410 09:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1292647 00:29:14.410 09:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:14.975 09:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:14.975 09:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1292647 00:29:14.975 09:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:15.541 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:15.541 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1292647 00:29:15.541 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:15.541 Initializing NVMe Controllers 00:29:15.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:15.541 Controller IO queue size 128, less than required. 00:29:15.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:15.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:15.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:15.541 Initialization complete. Launching workers. 
00:29:15.541 ======================================================== 00:29:15.541 Latency(us) 00:29:15.541 Device Information : IOPS MiB/s Average min max 00:29:15.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002098.63 1000121.21 1005863.85 00:29:15.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004518.58 1000207.30 1041889.12 00:29:15.541 ======================================================== 00:29:15.541 Total : 256.00 0.12 1003308.61 1000121.21 1041889.12 00:29:15.541 00:29:15.798 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:15.798 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1292647 00:29:15.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1292647) - No such process 00:29:15.799 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1292647 00:29:15.799 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:15.799 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:15.799 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.799 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:15.799 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.799 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:15.799 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.799 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.799 rmmod nvme_tcp 00:29:15.799 rmmod nvme_fabrics 00:29:16.057 rmmod nvme_keyring 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1291933 ']' 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1291933 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1291933 ']' 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1291933 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1291933 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1291933' 00:29:16.057 killing process with pid 1291933 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1291933 00:29:16.057 09:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1291933 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.057 09:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:18.593 00:29:18.593 real 0m16.253s 00:29:18.593 user 0m26.437s 00:29:18.593 sys 0m6.202s 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:18.593 ************************************ 00:29:18.593 END TEST nvmf_delete_subsystem 00:29:18.593 ************************************ 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:18.593 ************************************ 00:29:18.593 START TEST nvmf_host_management 00:29:18.593 ************************************ 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:18.593 * Looking for test storage... 00:29:18.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:18.593 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:18.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.594 --rc genhtml_branch_coverage=1 00:29:18.594 --rc genhtml_function_coverage=1 00:29:18.594 --rc genhtml_legend=1 00:29:18.594 --rc geninfo_all_blocks=1 00:29:18.594 --rc geninfo_unexecuted_blocks=1 00:29:18.594 00:29:18.594 ' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:18.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.594 --rc genhtml_branch_coverage=1 00:29:18.594 --rc genhtml_function_coverage=1 00:29:18.594 --rc genhtml_legend=1 00:29:18.594 --rc geninfo_all_blocks=1 00:29:18.594 --rc geninfo_unexecuted_blocks=1 00:29:18.594 00:29:18.594 ' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:18.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.594 --rc genhtml_branch_coverage=1 00:29:18.594 --rc genhtml_function_coverage=1 00:29:18.594 --rc genhtml_legend=1 00:29:18.594 --rc geninfo_all_blocks=1 00:29:18.594 --rc geninfo_unexecuted_blocks=1 00:29:18.594 00:29:18.594 ' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:18.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.594 --rc genhtml_branch_coverage=1 00:29:18.594 --rc genhtml_function_coverage=1 00:29:18.594 --rc genhtml_legend=1 
00:29:18.594 --rc geninfo_all_blocks=1 00:29:18.594 --rc geninfo_unexecuted_blocks=1 00:29:18.594 00:29:18.594 ' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.594 09:31:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.594 09:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.167 09:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:25.167 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:25.167 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:25.167 Found net devices under 0000:86:00.0: cvl_0_0 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:25.167 Found net devices under 0000:86:00.1: cvl_0_1 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.167 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:29:25.168 00:29:25.168 --- 10.0.0.2 ping statistics --- 00:29:25.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.168 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:29:25.168 00:29:25.168 --- 10.0.0.1 ping statistics --- 00:29:25.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.168 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1296645 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1296645 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1296645 ']' 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:25.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.168 [2024-11-19 09:31:25.400774] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:25.168 [2024-11-19 09:31:25.401697] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:29:25.168 [2024-11-19 09:31:25.401731] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.168 [2024-11-19 09:31:25.462420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.168 [2024-11-19 09:31:25.508276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.168 [2024-11-19 09:31:25.508308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.168 [2024-11-19 09:31:25.508315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.168 [2024-11-19 09:31:25.508321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.168 [2024-11-19 09:31:25.508326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.168 [2024-11-19 09:31:25.509974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.168 [2024-11-19 09:31:25.510082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.168 [2024-11-19 09:31:25.510189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.168 [2024-11-19 09:31:25.510190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:25.168 [2024-11-19 09:31:25.578344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:25.168 [2024-11-19 09:31:25.579240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:25.168 [2024-11-19 09:31:25.579441] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:25.168 [2024-11-19 09:31:25.579817] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:25.168 [2024-11-19 09:31:25.579843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
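Note on the startup sequence just traced: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -m 0x1E --interrupt-mode, then waitforlisten blocks until pid 1296645 answers on /var/tmp/spdk.sock. A minimal, hypothetical bash sketch of that readiness wait follows; the helper name, retry count, and the use of rpc_get_methods as the probe are assumptions, not the actual autotest_common.sh implementation:

    wait_for_rpc_sock() {  # hypothetical helper; not the SPDK implementation
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=${3:-100}
        while (( retries-- > 0 )); do
            # give up if the target process died before it started listening
            kill -0 "$pid" 2>/dev/null || return 1
            # any cheap RPC succeeding means the UNIX socket is up and serving
            if ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }
    # usage: ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 --interrupt-mode -m 0x1E &
    #        wait_for_rpc_sock "$!"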
00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.168 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.169 [2024-11-19 09:31:25.642873] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.169 Malloc0 00:29:25.169 [2024-11-19 09:31:25.731154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1296731 00:29:25.169 09:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1296731 /var/tmp/bdevperf.sock 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1296731 ']' 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:25.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.169 { 00:29:25.169 "params": { 00:29:25.169 "name": "Nvme$subsystem", 00:29:25.169 "trtype": "$TEST_TRANSPORT", 00:29:25.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.169 "adrfam": "ipv4", 00:29:25.169 "trsvcid": "$NVMF_PORT", 00:29:25.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.169 "hdgst": ${hdgst:-false}, 00:29:25.169 "ddgst": ${ddgst:-false} 00:29:25.169 }, 00:29:25.169 "method": "bdev_nvme_attach_controller" 00:29:25.169 } 00:29:25.169 EOF 00:29:25.169 )") 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
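The gen_nvmf_target_json trace above builds bdevperf's attach configuration one heredoc fragment per subsystem, joins the fragments with IFS=',', and runs the result through jq; the expansion it produces is printed just below. A condensed sketch of the same pattern, with the addressing values hard-coded from this run; wrapping the fragments in a bare JSON array is a simplification here, where the real helper splices them into a larger bdev config:

    gen_target_json_sketch() {  # simplified model of gen_nvmf_target_json
        local subsystem config=()
        for subsystem in "${@:-1}"; do  # default: single subsystem "1"
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        local IFS=,                          # join fragments with commas
        printf '[%s]\n' "${config[*]}" | jq .  # validate/pretty-print the JSON
    }

In the run above it is invoked as gen_nvmf_target_json 0, and bdevperf consumes the output through --json /dev/fd/63.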
00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:25.169 09:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:25.169 "params": { 00:29:25.169 "name": "Nvme0", 00:29:25.169 "trtype": "tcp", 00:29:25.169 "traddr": "10.0.0.2", 00:29:25.169 "adrfam": "ipv4", 00:29:25.169 "trsvcid": "4420", 00:29:25.169 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.169 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:25.169 "hdgst": false, 00:29:25.169 "ddgst": false 00:29:25.169 }, 00:29:25.169 "method": "bdev_nvme_attach_controller" 00:29:25.169 }' 00:29:25.169 [2024-11-19 09:31:25.831215] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:29:25.169 [2024-11-19 09:31:25.831263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296731 ] 00:29:25.169 [2024-11-19 09:31:25.906464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.169 [2024-11-19 09:31:25.948030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.427 Running I/O for 10 seconds... 00:29:25.427 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:25.427 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:29:25.427 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:25.427 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:29:25.428 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.688 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:25.688 [2024-11-19 09:31:26.632018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.688 [2024-11-19 09:31:26.632062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.688 [2024-11-19 09:31:26.632072] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.688 [2024-11-19 09:31:26.632079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.688 [2024-11-19 09:31:26.632087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.688 [2024-11-19 09:31:26.632094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.688 [2024-11-19 09:31:26.632101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:25.688 [2024-11-19 09:31:26.632108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.688 [2024-11-19 09:31:26.632120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1500 is same with the state(6) to be set 00:29:25.688 [2024-11-19 09:31:26.634834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56fa0 is same with the state(6) to be set 00:29:25.688 [... identical tcp.c:1773:nvmf_tcp_qpair_set_recv_state messages for tqpair=0x1b56fa0, timestamps 09:31:26.634875 through 09:31:26.635229, elided as duplicates ...] 00:29:25.688 [2024-11-19 09:31:26.635234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56fa0 is same with the 
state(6) to be set 00:29:25.689 [2024-11-19 09:31:26.635241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56fa0 is same with the state(6) to be set 00:29:25.689 [2024-11-19 09:31:26.635247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56fa0 is same with the state(6) to be set 00:29:25.689 [2024-11-19 09:31:26.635253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b56fa0 is same with the state(6) to be set 00:29:25.689 [2024-11-19 09:31:26.635321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.689 [2024-11-19 09:31:26.635899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.689 [2024-11-19 09:31:26.635908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.635915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.635923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.635930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.635939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.635945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.635970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.635978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.635986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.635994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636120] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:25.690 [2024-11-19 09:31:26.636265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.690 [2024-11-19 09:31:26.636272] nvme_qpair.c: 
00:29:25.690 [2024-11-19 09:31:26.636356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca820 is same with the state(6) to be set
00:29:25.690 [2024-11-19 09:31:26.637337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:25.690 task offset: 98304 on job bdev=Nvme0n1 fails
00:29:25.690
00:29:25.690 Latency(us)
00:29:25.690 [2024-11-19T08:31:26.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:25.690 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:25.690 Job: Nvme0n1 ended in about 0.41 seconds with error
00:29:25.690 Verification LBA range: start 0x0 length 0x400
00:29:25.690 Nvme0n1 : 0.41 1878.15 117.38 156.51 0.00 30610.02 3575.99 27696.08
00:29:25.690 [2024-11-19T08:31:26.749Z] ===================================================================================================================
00:29:25.690 [2024-11-19T08:31:26.749Z] Total : 1878.15 117.38 156.51 0.00 30610.02 3575.99 27696.08
00:29:25.690 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.690 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:29:25.688 [2024-11-19 09:31:26.639875] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:25.688 [2024-11-19 09:31:26.639898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1500 (9): Bad file descriptor
00:29:25.690 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:25.690 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:25.690 [2024-11-19 09:31:26.640846] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:29:25.690 [2024-11-19 09:31:26.640936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:29:25.690 [2024-11-19 09:31:26.640967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:25.690 [2024-11-19 09:31:26.640984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:29:25.690 [2024-11-19 09:31:26.640992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:29:25.690 [2024-11-19 09:31:26.640999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:25.690 [2024-11-19 09:31:26.641006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x22b1500
00:29:25.690 [2024-11-19 09:31:26.641025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1500 (9): Bad file descriptor
00:29:25.690 [2024-11-19 09:31:26.641036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:25.690 [2024-11-19 09:31:26.641044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:25.690 [2024-11-19 09:31:26.641053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:25.691 [2024-11-19 09:31:26.641061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
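The failure chain above is the intended first half of host_management.sh: bdevperf connected as nqn.2016-06.io.spdk:host0 before that host NQN was on cnode0's allow list, so nvmf_qpair_access_allowed rejects the fabric CONNECT and the initiator sees sct 1, sc 132 (printed as COMMAND SPECIFIC 01/84, the Fabrics CONNECT invalid-host status). Reconnect attempts keep failing until the nvmf_subsystem_add_host RPC traced above takes effect. A minimal sketch of that allow-list toggle, reusing the rpc.py path from this workspace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # while host0 is not on the allow list, its FABRIC CONNECT completes with sct 1 / sc 132
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # the initiator's next reconnect poll can then complete the CONNECT and resume I/O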
00:29:25.691 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:25.691 09:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1296731
00:29:26.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1296731) - No such process
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:29:26.624 {
00:29:26.624 "params": {
00:29:26.624 "name": "Nvme$subsystem",
00:29:26.624 "trtype": "$TEST_TRANSPORT",
00:29:26.624 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:26.624 "adrfam": "ipv4",
00:29:26.624 "trsvcid": "$NVMF_PORT",
00:29:26.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:26.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:26.624 "hdgst": ${hdgst:-false},
00:29:26.624 "ddgst": ${ddgst:-false}
00:29:26.624 },
00:29:26.624 "method": "bdev_nvme_attach_controller"
00:29:26.624 }
00:29:26.624 EOF
00:29:26.624 )")
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:29:26.624 09:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:26.624 "params": {
00:29:26.624 "name": "Nvme0",
00:29:26.624 "trtype": "tcp",
00:29:26.624 "traddr": "10.0.0.2",
00:29:26.624 "adrfam": "ipv4",
00:29:26.624 "trsvcid": "4420",
00:29:26.624 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:29:26.624 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:29:26.624 "hdgst": false,
00:29:26.624 "ddgst": false
00:29:26.624 },
00:29:26.624 "method": "bdev_nvme_attach_controller"
00:29:26.624 }'
00:29:26.883 [2024-11-19 09:31:27.690119] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
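For context on the trace above: gen_nvmf_target_json (a shell function in test/nvmf/common.sh) expands the heredoc template once per subsystem id and runs the result through jq; the harness then hands it to bdevperf as --json /dev/fd/62, which is a bash process-substitution file descriptor. A sketch of the equivalent standalone invocation, assuming the same workspace layout and that common.sh has been sourced with this run's environment (TEST_TRANSPORT=tcp, target IP 10.0.0.2, port 4420):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  source test/nvmf/common.sh            # defines gen_nvmf_target_json
  # <(...) is what appears as /dev/fd/62 on the traced command line
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1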
00:29:26.883 [2024-11-19 09:31:27.690169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297155 ]
00:29:26.883 [2024-11-19 09:31:27.766888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:26.883 [2024-11-19 09:31:27.807155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:27.140 Running I/O for 1 seconds...
00:29:28.337 1984.00 IOPS, 124.00 MiB/s
00:29:28.337 Latency(us)
00:29:28.337 [2024-11-19T08:31:29.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.337 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:28.337 Verification LBA range: start 0x0 length 0x400
00:29:28.337 Nvme0n1 : 1.06 1937.43 121.09 0.00 0.00 31331.33 7180.47 47413.87
00:29:28.337 [2024-11-19T08:31:29.396Z] ===================================================================================================================
00:29:28.337 [2024-11-19T08:31:29.396Z] Total : 1937.43 121.09 0.00 0.00 31331.33 7180.47 47413.87
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:28.337 rmmod nvme_tcp
00:29:28.337 rmmod nvme_fabrics
00:29:28.337 rmmod nvme_keyring
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1296645 ']'
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1296645
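Reading the two bdevperf result tables: the columns after the job name are runtime in seconds, IOPS, MiB/s, failed I/O per second, timed-out I/O per second, and average/min/max latency in microseconds. With 65536-byte I/Os each completion moves 1/16 MiB, so the throughput column is simply IOPS divided by 16:

  1937.43 IOPS / 16 = 121.09 MiB/s   (this successful run)
  1878.15 IOPS / 16 = 117.38 MiB/s   (the earlier failed run)

The telling difference between the runs is the Fail/s column: 156.51 aborted I/Os per second while the host was still disallowed, 0.00 once it had been added.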
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1296645 ']'
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1296645
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:28.337 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1296645
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1296645'
00:29:28.599 killing process with pid 1296645
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1296645
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1296645
00:29:28.599 [2024-11-19 09:31:29.573174] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:28.599 09:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:31.136 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:31.136 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:29:31.136
00:29:31.136 real 0m12.429s
00:29:31.136 user 0m18.464s
00:29:31.136 sys 0m6.370s
00:29:31.136 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:31.136 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:31.136 ************************************
00:29:31.136 END TEST nvmf_host_management
00:29:31.136 ************************************
00:29:31.136 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:31.137 ************************************
00:29:31.137 START TEST nvmf_lvol
00:29:31.137 ************************************
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:29:31.137 * Looking for test storage...
00:29:31.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
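The xtrace entering cmp_versions here (it continues below through the per-component loop) is the harness checking whether the installed lcov predates 2.x: lt splits each version string on '.', '-' and ':' into arrays and compares them element by element. A self-contained sketch of the same check in plain bash; the helper name version_lt is invented here, not the scripts/common.sh API:

  version_lt() {   # usage: version_lt 1.15 2  -> exit status 0 (true)
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"    # "1.15" splits into (1 15)
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # first lower component decides
      ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1   # equal versions are not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"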
00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:31.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.137 --rc genhtml_branch_coverage=1 00:29:31.137 --rc genhtml_function_coverage=1 00:29:31.137 --rc genhtml_legend=1 00:29:31.137 --rc geninfo_all_blocks=1 00:29:31.137 --rc geninfo_unexecuted_blocks=1 00:29:31.137 00:29:31.137 ' 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:31.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.137 --rc genhtml_branch_coverage=1 00:29:31.137 --rc genhtml_function_coverage=1 00:29:31.137 --rc genhtml_legend=1 00:29:31.137 --rc geninfo_all_blocks=1 00:29:31.137 --rc geninfo_unexecuted_blocks=1 00:29:31.137 00:29:31.137 ' 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:31.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.137 --rc genhtml_branch_coverage=1 00:29:31.137 --rc genhtml_function_coverage=1 00:29:31.137 --rc genhtml_legend=1 00:29:31.137 --rc geninfo_all_blocks=1 00:29:31.137 --rc geninfo_unexecuted_blocks=1 00:29:31.137 00:29:31.137 ' 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:31.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.137 --rc genhtml_branch_coverage=1 00:29:31.137 --rc genhtml_function_coverage=1 
00:29:31.137 --rc genhtml_legend=1 00:29:31.137 --rc geninfo_all_blocks=1 00:29:31.137 --rc geninfo_unexecuted_blocks=1 00:29:31.137 00:29:31.137 ' 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.137 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three toolchain directories repeated several more times...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...same repeated toolchain/system PATH value, collapsed...]
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...same repeated toolchain/system PATH value, collapsed...]
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...same repeated toolchain/system PATH value, collapsed...]
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:31.138 09:31:31
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:31.138 09:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.413 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.414 09:31:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:36.414 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:36.414 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:36.414 Found net devices under 0000:86:00.0: cvl_0_0 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:36.414 Found net devices under 0000:86:00.1: cvl_0_1 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:36.414 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.673 
09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.673 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:29:36.674 00:29:36.674 --- 10.0.0.2 ping statistics --- 00:29:36.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.674 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:29:36.674 00:29:36.674 --- 10.0.0.1 ping statistics --- 00:29:36.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.674 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:36.674 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1300904 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1300904 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1300904 ']' 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:36.932 09:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:36.933 [2024-11-19 09:31:37.809919] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:36.933 [2024-11-19 09:31:37.810864] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:29:36.933 [2024-11-19 09:31:37.810896] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.933 [2024-11-19 09:31:37.888972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:36.933 [2024-11-19 09:31:37.930743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.933 [2024-11-19 09:31:37.930782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.933 [2024-11-19 09:31:37.930789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.933 [2024-11-19 09:31:37.930795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.933 [2024-11-19 09:31:37.930801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.933 [2024-11-19 09:31:37.932071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.933 [2024-11-19 09:31:37.932179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.933 [2024-11-19 09:31:37.932180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.192 [2024-11-19 09:31:37.998882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:37.192 [2024-11-19 09:31:37.999732] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:37.192 [2024-11-19 09:31:37.999819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:37.192 [2024-11-19 09:31:38.000014] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
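[Annotation] Condensed from the nvmf_tcp_init and nvmfappstart trace above, the test topology and target launch for this run come down to the sketch below. This is a readability recap of commands already logged (device names, addresses, paths and the 0x7 core mask are the values from this run), not an extra step:

    ip netns add cvl_0_0_ns_spdk                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # first E810 port -> target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged SPDK_NVMF for cleanup
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7          # three reactors (cores 0-2), all in intr mode

The two pings above verify both directions of the cvl_0_0/cvl_0_1 link before the target starts, and nvmf/common.sh@293 prefixes NVMF_APP with the "ip netns exec" wrapper so the app launches inside the target namespace.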
00:29:37.192 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:37.192 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:29:37.192 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:37.192 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:37.192 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:37.192 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.192 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:37.192 [2024-11-19 09:31:38.236855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.451 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:37.710 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:37.710 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:37.710 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:37.710 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:37.967 09:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:38.225 09:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e985da78-7606-4d05-9f51-de5ebf696b4b 00:29:38.225 09:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e985da78-7606-4d05-9f51-de5ebf696b4b lvol 20 00:29:38.482 09:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=48d3c762-ca79-401c-a48e-b3114c651369 00:29:38.482 09:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:38.740 09:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 48d3c762-ca79-401c-a48e-b3114c651369 00:29:38.740 09:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:38.997 [2024-11-19 09:31:39.904728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:38.997 09:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:39.254 09:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1301185 00:29:39.254 09:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:39.254 09:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:40.188 09:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 48d3c762-ca79-401c-a48e-b3114c651369 MY_SNAPSHOT 00:29:40.445 09:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c35a3362-953b-4d06-9edd-9286ac874166 00:29:40.445 09:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 48d3c762-ca79-401c-a48e-b3114c651369 30 00:29:40.704 09:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c35a3362-953b-4d06-9edd-9286ac874166 MY_CLONE 00:29:40.962 09:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=efaacb3e-7147-4e91-b75a-0d9db30a3b26 00:29:40.962 09:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate efaacb3e-7147-4e91-b75a-0d9db30a3b26 00:29:41.527 09:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1301185 00:29:49.630 Initializing NVMe Controllers 00:29:49.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:49.630 Controller IO queue size 128, less than required. 00:29:49.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:49.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:49.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:49.630 Initialization complete. Launching workers. 
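[Annotation] Before the results land below, the RPC sequence nvmf_lvol.sh@24-@50 just drove condenses to the following. This is only a paraphrase of the traced calls (rpc.py stands for the full scripts/rpc.py path logged above; the UUIDs are the values generated in this run):

    rpc.py bdev_malloc_create 64 512                   # run twice -> Malloc0, Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs          # -> e985da78-7606-4d05-9f51-de5ebf696b4b
    rpc.py bdev_lvol_create -u e985da78-7606-4d05-9f51-de5ebf696b4b lvol 20
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 48d3c762-ca79-401c-a48e-b3114c651369
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf (4k randwrite, q=128, 10 s, cores 3-4) writes to the namespace:
    rpc.py bdev_lvol_snapshot 48d3c762-ca79-401c-a48e-b3114c651369 MY_SNAPSHOT
    rpc.py bdev_lvol_resize 48d3c762-ca79-401c-a48e-b3114c651369 30   # grow 20 -> 30 (LVOL_BDEV_FINAL_SIZE)
    rpc.py bdev_lvol_clone c35a3362-953b-4d06-9edd-9286ac874166 MY_CLONE
    rpc.py bdev_lvol_inflate efaacb3e-7147-4e91-b75a-0d9db30a3b26

The latency table that follows is the perf run completing after the snapshot, resize, clone and inflate operations landed underneath it.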
00:29:49.630 ========================================================
00:29:49.630 Latency(us)
00:29:49.630 Device Information : IOPS MiB/s Average min max
00:29:49.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12325.20 48.15 10387.77 1603.55 50681.75
00:29:49.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12243.70 47.83 10457.31 4049.07 47220.90
00:29:49.631 ========================================================
00:29:49.631 Total : 24568.90 95.97 10422.42 1603.55 50681.75
00:29:49.631
00:29:49.631 09:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:29:49.887 09:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 48d3c762-ca79-401c-a48e-b3114c651369
00:29:50.144 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e985da78-7606-4d05-9f51-de5ebf696b4b
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:50.402 rmmod nvme_tcp
00:29:50.402 rmmod nvme_fabrics
00:29:50.402 rmmod nvme_keyring
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1300904 ']'
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1300904
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1300904 ']'
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1300904
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1300904 00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1300904' 00:29:50.402 killing process with pid 1300904 00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1300904 00:29:50.402 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1300904 00:29:50.661 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.662 09:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.199 00:29:53.199 real 0m21.900s 00:29:53.199 user 0m55.852s 00:29:53.199 sys 0m9.941s 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:53.199 ************************************ 00:29:53.199 END TEST nvmf_lvol 00:29:53.199 ************************************ 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:53.199 ************************************ 00:29:53.199 START TEST nvmf_lvs_grow 00:29:53.199 
************************************ 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:53.199 * Looking for test storage... 00:29:53.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:53.199 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:53.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.200 --rc genhtml_branch_coverage=1 00:29:53.200 --rc genhtml_function_coverage=1 00:29:53.200 --rc genhtml_legend=1 00:29:53.200 --rc geninfo_all_blocks=1 00:29:53.200 --rc geninfo_unexecuted_blocks=1 00:29:53.200 00:29:53.200 ' 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:53.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.200 --rc genhtml_branch_coverage=1 00:29:53.200 --rc genhtml_function_coverage=1 00:29:53.200 --rc genhtml_legend=1 00:29:53.200 --rc geninfo_all_blocks=1 00:29:53.200 --rc geninfo_unexecuted_blocks=1 00:29:53.200 00:29:53.200 ' 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:53.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.200 --rc genhtml_branch_coverage=1 00:29:53.200 --rc genhtml_function_coverage=1 00:29:53.200 --rc genhtml_legend=1 00:29:53.200 --rc geninfo_all_blocks=1 00:29:53.200 --rc geninfo_unexecuted_blocks=1 00:29:53.200 00:29:53.200 ' 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:53.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.200 --rc genhtml_branch_coverage=1 00:29:53.200 --rc genhtml_function_coverage=1 00:29:53.200 --rc genhtml_legend=1 00:29:53.200 --rc geninfo_all_blocks=1 00:29:53.200 --rc geninfo_unexecuted_blocks=1 00:29:53.200 00:29:53.200 ' 00:29:53.200 09:31:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
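[Annotation] nvmf_lvs_grow.sh now calls nvmftestinit, so the same E810 discovery and namespace setup repeat below exactly as in the lvol test. Roughly, the discovery being traced reduces to this sketch (variable names as in nvmf/common.sh above; simplified to the e810/tcp branch this run takes):

    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    pci_devs=("${e810[@]}")                            # SPDK_TEST_NVMF_NICS=e810 selects the E810 list
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # kernel netdevs behind the port
        pci_net_devs=("${pci_net_devs[@]##*/}")
        net_devs+=("${pci_net_devs[@]}")               # -> cvl_0_0, cvl_0_1 on 0000:86:00.0/.1
    done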
00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:53.200 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.201 09:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:59.766 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.766 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.766 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.766 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.766 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.766 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.766 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.766 09:31:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.766 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:59.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:59.767 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:59.767 Found net devices under 0000:86:00.0: cvl_0_0 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:59.767 Found net devices under 0000:86:00.1: cvl_0_1 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.767 09:31:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.767 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:29:59.768 00:29:59.768 --- 10.0.0.2 ping statistics --- 00:29:59.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.768 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:29:59.768 00:29:59.768 --- 10.0.0.1 ping statistics --- 00:29:59.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.768 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1306553 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1306553 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1306553 ']' 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:59.768 09:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:59.768 [2024-11-19 09:31:59.914931] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
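The namespace plumbing traced above is the whole of the TCP test topology: the target-side port of the e810 NIC is moved into a private network namespace while the initiator side stays in the root namespace, so target and initiator can exchange NVMe/TCP traffic over real hardware on a single host. A condensed sketch of the same steps, using the interface and namespace names from this run (cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk are discovered at runtime and will differ on other machines):

    # target interface goes into its own netns; initiator interface stays in the root netns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port toward the initiator and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt invocation that follows is accordingly wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the target listens on 10.0.0.2 while bdevperf connects from the root namespace.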
00:29:59.768 [2024-11-19 09:31:59.915881] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:29:59.768 [2024-11-19 09:31:59.915914] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.768 [2024-11-19 09:31:59.997627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.768 [2024-11-19 09:32:00.047653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.768 [2024-11-19 09:32:00.047690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.768 [2024-11-19 09:32:00.047697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.768 [2024-11-19 09:32:00.047707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.768 [2024-11-19 09:32:00.047712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.768 [2024-11-19 09:32:00.048258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.768 [2024-11-19 09:32:00.118175] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:59.768 [2024-11-19 09:32:00.118390] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:59.768 09:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:59.768 09:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:29:59.768 09:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.768 09:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.768 09:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:59.768 09:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.768 09:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:00.027 [2024-11-19 09:32:00.992925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:00.027 ************************************ 00:30:00.027 START TEST lvs_grow_clean 00:30:00.027 ************************************ 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:00.027 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:00.286 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:00.286 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:00.544 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:00.544 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:00.544 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:00.803 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:00.803 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:00.803 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eb2b548f-2fbd-4837-84a5-b512da40e4aa lvol 150 00:30:01.061 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=df6b866d-0526-4c44-974e-a177841f8a88 00:30:01.061 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:01.061 09:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:01.061 [2024-11-19 09:32:02.080649] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:01.061 [2024-11-19 09:32:02.080782] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:01.061 true 00:30:01.061 09:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:01.061 09:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:01.319 09:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:01.319 09:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:01.578 09:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 df6b866d-0526-4c44-974e-a177841f8a88 00:30:01.836 09:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:01.836 [2024-11-19 09:32:02.853112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.836 09:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1307055 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1307055 /var/tmp/bdevperf.sock 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1307055 ']' 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:02.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:02.094 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:02.094 [2024-11-19 09:32:03.090146] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:30:02.094 [2024-11-19 09:32:03.090196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307055 ] 00:30:02.353 [2024-11-19 09:32:03.152371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.353 [2024-11-19 09:32:03.196134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.353 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:02.353 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:30:02.353 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:02.611 Nvme0n1 00:30:02.868 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:02.868 [ 00:30:02.868 { 00:30:02.868 "name": "Nvme0n1", 00:30:02.868 "aliases": [ 00:30:02.868 "df6b866d-0526-4c44-974e-a177841f8a88" 00:30:02.868 ], 00:30:02.868 "product_name": "NVMe disk", 00:30:02.868 "block_size": 4096, 00:30:02.868 "num_blocks": 38912, 00:30:02.868 "uuid": "df6b866d-0526-4c44-974e-a177841f8a88", 00:30:02.868 "numa_id": 1, 00:30:02.868 "assigned_rate_limits": { 00:30:02.868 "rw_ios_per_sec": 0, 00:30:02.868 "rw_mbytes_per_sec": 0, 00:30:02.868 "r_mbytes_per_sec": 0, 00:30:02.868 "w_mbytes_per_sec": 0 00:30:02.868 }, 00:30:02.868 "claimed": false, 00:30:02.868 "zoned": false, 00:30:02.868 "supported_io_types": { 00:30:02.868 "read": true, 00:30:02.868 "write": true, 00:30:02.868 "unmap": true, 00:30:02.868 "flush": true, 00:30:02.868 "reset": true, 00:30:02.868 "nvme_admin": true, 00:30:02.868 "nvme_io": true, 00:30:02.868 "nvme_io_md": false, 00:30:02.868 "write_zeroes": true, 00:30:02.868 "zcopy": false, 00:30:02.868 "get_zone_info": false, 00:30:02.868 "zone_management": false, 00:30:02.868 "zone_append": false, 00:30:02.868 "compare": true, 00:30:02.868 "compare_and_write": true, 00:30:02.868 "abort": true, 00:30:02.868 "seek_hole": false, 00:30:02.869 "seek_data": false, 00:30:02.869 "copy": true, 
00:30:02.869 "nvme_iov_md": false 00:30:02.869 }, 00:30:02.869 "memory_domains": [ 00:30:02.869 { 00:30:02.869 "dma_device_id": "system", 00:30:02.869 "dma_device_type": 1 00:30:02.869 } 00:30:02.869 ], 00:30:02.869 "driver_specific": { 00:30:02.869 "nvme": [ 00:30:02.869 { 00:30:02.869 "trid": { 00:30:02.869 "trtype": "TCP", 00:30:02.869 "adrfam": "IPv4", 00:30:02.869 "traddr": "10.0.0.2", 00:30:02.869 "trsvcid": "4420", 00:30:02.869 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:02.869 }, 00:30:02.869 "ctrlr_data": { 00:30:02.869 "cntlid": 1, 00:30:02.869 "vendor_id": "0x8086", 00:30:02.869 "model_number": "SPDK bdev Controller", 00:30:02.869 "serial_number": "SPDK0", 00:30:02.869 "firmware_revision": "25.01", 00:30:02.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:02.869 "oacs": { 00:30:02.869 "security": 0, 00:30:02.869 "format": 0, 00:30:02.869 "firmware": 0, 00:30:02.869 "ns_manage": 0 00:30:02.869 }, 00:30:02.869 "multi_ctrlr": true, 00:30:02.869 "ana_reporting": false 00:30:02.869 }, 00:30:02.869 "vs": { 00:30:02.869 "nvme_version": "1.3" 00:30:02.869 }, 00:30:02.869 "ns_data": { 00:30:02.869 "id": 1, 00:30:02.869 "can_share": true 00:30:02.869 } 00:30:02.869 } 00:30:02.869 ], 00:30:02.869 "mp_policy": "active_passive" 00:30:02.869 } 00:30:02.869 } 00:30:02.869 ] 00:30:02.869 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1307275 00:30:02.869 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:02.869 09:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:03.127 Running I/O for 10 seconds... 
00:30:04.118 Latency(us) 00:30:04.118 [2024-11-19T08:32:05.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:04.118 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:04.118 [2024-11-19T08:32:05.177Z] =================================================================================================================== 00:30:04.118 [2024-11-19T08:32:05.177Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:30:04.118 00:30:05.090 09:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:05.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.090 Nvme0n1 : 2.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:30:05.090 [2024-11-19T08:32:06.149Z] =================================================================================================================== 00:30:05.090 [2024-11-19T08:32:06.149Z] Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:30:05.090 00:30:05.090 true 00:30:05.090 09:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:05.090 09:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:05.349 09:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:05.349 09:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:05.349 09:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1307275 00:30:05.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:05.915 Nvme0n1 : 3.00 22775.33 88.97 0.00 0.00 0.00 0.00 0.00 00:30:05.915 [2024-11-19T08:32:06.974Z] =================================================================================================================== 00:30:05.915 [2024-11-19T08:32:06.974Z] Total : 22775.33 88.97 0.00 0.00 0.00 0.00 0.00 00:30:05.915 00:30:07.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:07.295 Nvme0n1 : 4.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:07.295 [2024-11-19T08:32:08.354Z] =================================================================================================================== 00:30:07.295 [2024-11-19T08:32:08.354Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:07.295 00:30:08.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:08.229 Nvme0n1 : 5.00 22936.20 89.59 0.00 0.00 0.00 0.00 0.00 00:30:08.229 [2024-11-19T08:32:09.288Z] =================================================================================================================== 00:30:08.229 [2024-11-19T08:32:09.288Z] Total : 22936.20 89.59 0.00 0.00 0.00 0.00 0.00 00:30:08.229 00:30:09.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:09.164 Nvme0n1 : 6.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:09.164 [2024-11-19T08:32:10.223Z] 
=================================================================================================================== 00:30:09.164 [2024-11-19T08:32:10.223Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:30:09.164 00:30:10.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:10.097 Nvme0n1 : 7.00 22950.71 89.65 0.00 0.00 0.00 0.00 0.00 00:30:10.097 [2024-11-19T08:32:11.156Z] =================================================================================================================== 00:30:10.097 [2024-11-19T08:32:11.156Z] Total : 22950.71 89.65 0.00 0.00 0.00 0.00 0.00 00:30:10.097 00:30:11.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.031 Nvme0n1 : 8.00 23002.88 89.85 0.00 0.00 0.00 0.00 0.00 00:30:11.031 [2024-11-19T08:32:12.090Z] =================================================================================================================== 00:30:11.031 [2024-11-19T08:32:12.090Z] Total : 23002.88 89.85 0.00 0.00 0.00 0.00 0.00 00:30:11.031 00:30:11.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.965 Nvme0n1 : 9.00 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:30:11.965 [2024-11-19T08:32:13.024Z] =================================================================================================================== 00:30:11.965 [2024-11-19T08:32:13.024Z] Total : 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:30:11.965 00:30:13.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.342 Nvme0n1 : 10.00 23063.20 90.09 0.00 0.00 0.00 0.00 0.00 00:30:13.342 [2024-11-19T08:32:14.401Z] =================================================================================================================== 00:30:13.342 [2024-11-19T08:32:14.401Z] Total : 23063.20 90.09 0.00 0.00 0.00 0.00 0.00 00:30:13.342 00:30:13.342 00:30:13.342 Latency(us) 00:30:13.342 [2024-11-19T08:32:14.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.342 Nvme0n1 : 10.01 23063.92 90.09 0.00 0.00 5546.81 5043.42 26442.35 00:30:13.342 [2024-11-19T08:32:14.401Z] =================================================================================================================== 00:30:13.342 [2024-11-19T08:32:14.401Z] Total : 23063.92 90.09 0.00 0.00 5546.81 5043.42 26442.35 00:30:13.342 { 00:30:13.342 "results": [ 00:30:13.342 { 00:30:13.342 "job": "Nvme0n1", 00:30:13.342 "core_mask": "0x2", 00:30:13.342 "workload": "randwrite", 00:30:13.342 "status": "finished", 00:30:13.342 "queue_depth": 128, 00:30:13.342 "io_size": 4096, 00:30:13.342 "runtime": 10.005238, 00:30:13.342 "iops": 23063.91911916538, 00:30:13.342 "mibps": 90.09343405923977, 00:30:13.342 "io_failed": 0, 00:30:13.342 "io_timeout": 0, 00:30:13.342 "avg_latency_us": 5546.8136524603015, 00:30:13.342 "min_latency_us": 5043.422608695652, 00:30:13.342 "max_latency_us": 26442.351304347827 00:30:13.342 } 00:30:13.342 ], 00:30:13.342 "core_count": 1 00:30:13.342 } 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1307055 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1307055 ']' 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1307055 
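Read against the grow, the summary above is the real pass/fail signal of the clean case: throughput ramps from 22352 IOPS in the first second to roughly 23063 at the end, with zero failed and zero timed-out I/Os while the lvstore doubled beneath the namespace, and the structured results block repeats the same figures. If that block were captured to a file (results.json is a name assumed here purely for illustration), the interesting fields pull out with a one-liner:

    # hypothetical post-processing of the results block printed above
    jq '.results[0] | {iops, avg_latency_us, io_failed, io_timeout}' results.json

The avg_latency_us of ~5547 is also self-consistent with the throughput at queue depth 128: 128 / 23064 IOPS comes to about 5550 us per I/O at a full queue.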
00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1307055 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1307055' 00:30:13.342 killing process with pid 1307055 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1307055 00:30:13.342 Received shutdown signal, test time was about 10.000000 seconds 00:30:13.342 00:30:13.342 Latency(us) 00:30:13.342 [2024-11-19T08:32:14.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.342 [2024-11-19T08:32:14.401Z] =================================================================================================================== 00:30:13.342 [2024-11-19T08:32:14.401Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1307055 00:30:13.342 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:13.602 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:13.602 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:13.602 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:13.861 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:13.861 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:13.861 09:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:14.120 [2024-11-19 09:32:15.016709] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 
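The free_clusters readback that closes the clean pass is simple bookkeeping: with the 4 MiB cluster size chosen at lvstore creation, the 150 MiB lvol pins ceil(150/4) = 38 clusters, the grown store reports 99 data clusters in total, and 99 - 38 = 61 are free, which is exactly what the check expects. A sketch of the same verification, with $lvs again standing in for the lvstore UUID:

    total=$(rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters')  # 99 after the grow
    free=$(rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].free_clusters')         # 61
    (( total - free == 38 )) && echo "lvol still owns its 38 clusters after the grow"

The 38 also shows up verbatim as num_allocated_clusters in the lvol's bdev_get_bdevs output further down, once the aio_bdev has been recreated and the lvstore re-examined from disk.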
00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:14.120 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:14.379 request: 00:30:14.379 { 00:30:14.379 "uuid": "eb2b548f-2fbd-4837-84a5-b512da40e4aa", 00:30:14.379 "method": "bdev_lvol_get_lvstores", 00:30:14.379 "req_id": 1 00:30:14.379 } 00:30:14.379 Got JSON-RPC error response 00:30:14.379 response: 00:30:14.379 { 00:30:14.379 "code": -19, 00:30:14.379 "message": "No such device" 00:30:14.379 } 00:30:14.379 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:30:14.379 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:14.379 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:14.379 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:14.379 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:14.638 aio_bdev 00:30:14.638 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
df6b866d-0526-4c44-974e-a177841f8a88 00:30:14.638 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=df6b866d-0526-4c44-974e-a177841f8a88 00:30:14.638 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:14.638 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:30:14.638 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:14.638 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:14.638 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:14.638 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b df6b866d-0526-4c44-974e-a177841f8a88 -t 2000 00:30:14.898 [ 00:30:14.898 { 00:30:14.898 "name": "df6b866d-0526-4c44-974e-a177841f8a88", 00:30:14.898 "aliases": [ 00:30:14.898 "lvs/lvol" 00:30:14.898 ], 00:30:14.898 "product_name": "Logical Volume", 00:30:14.898 "block_size": 4096, 00:30:14.898 "num_blocks": 38912, 00:30:14.898 "uuid": "df6b866d-0526-4c44-974e-a177841f8a88", 00:30:14.898 "assigned_rate_limits": { 00:30:14.898 "rw_ios_per_sec": 0, 00:30:14.898 "rw_mbytes_per_sec": 0, 00:30:14.898 "r_mbytes_per_sec": 0, 00:30:14.898 "w_mbytes_per_sec": 0 00:30:14.898 }, 00:30:14.898 "claimed": false, 00:30:14.898 "zoned": false, 00:30:14.898 "supported_io_types": { 00:30:14.898 "read": true, 00:30:14.898 "write": true, 00:30:14.898 "unmap": true, 00:30:14.898 "flush": false, 00:30:14.898 "reset": true, 00:30:14.898 "nvme_admin": false, 00:30:14.898 "nvme_io": false, 00:30:14.898 "nvme_io_md": false, 00:30:14.898 "write_zeroes": true, 00:30:14.898 "zcopy": false, 00:30:14.898 "get_zone_info": false, 00:30:14.898 "zone_management": false, 00:30:14.898 "zone_append": false, 00:30:14.898 "compare": false, 00:30:14.898 "compare_and_write": false, 00:30:14.898 "abort": false, 00:30:14.898 "seek_hole": true, 00:30:14.898 "seek_data": true, 00:30:14.898 "copy": false, 00:30:14.898 "nvme_iov_md": false 00:30:14.898 }, 00:30:14.898 "driver_specific": { 00:30:14.898 "lvol": { 00:30:14.898 "lvol_store_uuid": "eb2b548f-2fbd-4837-84a5-b512da40e4aa", 00:30:14.898 "base_bdev": "aio_bdev", 00:30:14.898 "thin_provision": false, 00:30:14.898 "num_allocated_clusters": 38, 00:30:14.898 "snapshot": false, 00:30:14.898 "clone": false, 00:30:14.898 "esnap_clone": false 00:30:14.898 } 00:30:14.898 } 00:30:14.898 } 00:30:14.898 ] 00:30:14.898 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:30:14.898 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:14.898 09:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:15.157 09:32:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:15.157 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:15.157 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:15.157 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:15.157 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete df6b866d-0526-4c44-974e-a177841f8a88 00:30:15.416 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eb2b548f-2fbd-4837-84a5-b512da40e4aa 00:30:15.675 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:15.934 00:30:15.934 real 0m15.781s 00:30:15.934 user 0m15.406s 00:30:15.934 sys 0m1.447s 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:15.934 ************************************ 00:30:15.934 END TEST lvs_grow_clean 00:30:15.934 ************************************ 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:15.934 ************************************ 00:30:15.934 START TEST lvs_grow_dirty 00:30:15.934 ************************************ 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:15.934 09:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:16.193 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:16.193 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:16.452 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:16.452 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:16.452 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:16.711 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:16.711 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:16.711 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7a177020-74e3-4b1b-951f-6e76762cd47e lvol 150 00:30:16.711 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fb79bf15-8693-40f3-8aac-36d96bd0e7b6 00:30:16.711 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:16.711 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:16.970 [2024-11-19 09:32:17.896641] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:16.970 [2024-11-19 09:32:17.896773] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:16.970 true 00:30:16.970 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:16.970 09:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:17.229 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:17.229 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:17.488 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb79bf15-8693-40f3-8aac-36d96bd0e7b6 00:30:17.488 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:17.747 [2024-11-19 09:32:18.693093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.747 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:18.006 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1309657 00:30:18.006 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:18.006 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:18.007 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1309657 /var/tmp/bdevperf.sock 00:30:18.007 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1309657 ']' 00:30:18.007 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:18.007 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:18.007 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:18.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
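From here the dirty variant retraces the clean one: the same 200M -> 400M AIO file, the same 49-cluster lvstore, the same bdevperf workload, and once the bdevperf RPC socket is up the controller attach is identical (taken verbatim from the trace that follows):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

The difference is the ending: instead of a graceful teardown, the dirty path kill -9s the nvmf target after the run (visible near the end of this trace as 'line 75: 1306553 Killed') and starts a fresh target against the same backing storage, the point presumably being that the grow must survive an unclean shutdown rather than living only in memory.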
00:30:18.007 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:18.007 09:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:18.007 [2024-11-19 09:32:18.956455] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:30:18.007 [2024-11-19 09:32:18.956504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309657 ] 00:30:18.007 [2024-11-19 09:32:19.030646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.266 [2024-11-19 09:32:19.075433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.266 09:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:18.266 09:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:30:18.266 09:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:18.834 Nvme0n1 00:30:18.834 09:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:18.834 [ 00:30:18.834 { 00:30:18.834 "name": "Nvme0n1", 00:30:18.834 "aliases": [ 00:30:18.834 "fb79bf15-8693-40f3-8aac-36d96bd0e7b6" 00:30:18.834 ], 00:30:18.834 "product_name": "NVMe disk", 00:30:18.834 "block_size": 4096, 00:30:18.834 "num_blocks": 38912, 00:30:18.834 "uuid": "fb79bf15-8693-40f3-8aac-36d96bd0e7b6", 00:30:18.834 "numa_id": 1, 00:30:18.834 "assigned_rate_limits": { 00:30:18.834 "rw_ios_per_sec": 0, 00:30:18.834 "rw_mbytes_per_sec": 0, 00:30:18.834 "r_mbytes_per_sec": 0, 00:30:18.834 "w_mbytes_per_sec": 0 00:30:18.834 }, 00:30:18.834 "claimed": false, 00:30:18.834 "zoned": false, 00:30:18.834 "supported_io_types": { 00:30:18.834 "read": true, 00:30:18.834 "write": true, 00:30:18.834 "unmap": true, 00:30:18.834 "flush": true, 00:30:18.834 "reset": true, 00:30:18.834 "nvme_admin": true, 00:30:18.834 "nvme_io": true, 00:30:18.834 "nvme_io_md": false, 00:30:18.834 "write_zeroes": true, 00:30:18.834 "zcopy": false, 00:30:18.834 "get_zone_info": false, 00:30:18.834 "zone_management": false, 00:30:18.834 "zone_append": false, 00:30:18.834 "compare": true, 00:30:18.834 "compare_and_write": true, 00:30:18.834 "abort": true, 00:30:18.834 "seek_hole": false, 00:30:18.834 "seek_data": false, 00:30:18.834 "copy": true, 00:30:18.834 "nvme_iov_md": false 00:30:18.834 }, 00:30:18.834 "memory_domains": [ 00:30:18.834 { 00:30:18.834 "dma_device_id": "system", 00:30:18.834 "dma_device_type": 1 00:30:18.834 } 00:30:18.834 ], 00:30:18.834 "driver_specific": { 00:30:18.834 "nvme": [ 00:30:18.834 { 00:30:18.834 "trid": { 00:30:18.834 "trtype": "TCP", 00:30:18.834 "adrfam": "IPv4", 00:30:18.834 "traddr": "10.0.0.2", 00:30:18.834 "trsvcid": "4420", 00:30:18.834 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:18.834 }, 00:30:18.834 "ctrlr_data": 
{ 00:30:18.834 "cntlid": 1, 00:30:18.834 "vendor_id": "0x8086", 00:30:18.834 "model_number": "SPDK bdev Controller", 00:30:18.834 "serial_number": "SPDK0", 00:30:18.834 "firmware_revision": "25.01", 00:30:18.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:18.834 "oacs": { 00:30:18.834 "security": 0, 00:30:18.834 "format": 0, 00:30:18.834 "firmware": 0, 00:30:18.834 "ns_manage": 0 00:30:18.834 }, 00:30:18.834 "multi_ctrlr": true, 00:30:18.834 "ana_reporting": false 00:30:18.834 }, 00:30:18.834 "vs": { 00:30:18.834 "nvme_version": "1.3" 00:30:18.834 }, 00:30:18.834 "ns_data": { 00:30:18.834 "id": 1, 00:30:18.834 "can_share": true 00:30:18.834 } 00:30:18.834 } 00:30:18.834 ], 00:30:18.834 "mp_policy": "active_passive" 00:30:18.834 } 00:30:18.834 } 00:30:18.834 ] 00:30:18.834 09:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1309883 00:30:18.834 09:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:18.834 09:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:19.093 Running I/O for 10 seconds... 00:30:20.029 Latency(us) 00:30:20.029 [2024-11-19T08:32:21.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.029 Nvme0n1 : 1.00 21971.00 85.82 0.00 0.00 0.00 0.00 0.00 00:30:20.029 [2024-11-19T08:32:21.088Z] =================================================================================================================== 00:30:20.029 [2024-11-19T08:32:21.088Z] Total : 21971.00 85.82 0.00 0.00 0.00 0.00 0.00 00:30:20.029 00:30:20.966 09:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:20.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.966 Nvme0n1 : 2.00 22487.50 87.84 0.00 0.00 0.00 0.00 0.00 00:30:20.966 [2024-11-19T08:32:22.025Z] =================================================================================================================== 00:30:20.966 [2024-11-19T08:32:22.025Z] Total : 22487.50 87.84 0.00 0.00 0.00 0.00 0.00 00:30:20.966 00:30:20.966 true 00:30:21.225 09:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:21.225 09:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:21.225 09:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:21.225 09:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:21.225 09:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1309883 00:30:22.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.162 Nvme0n1 : 
3.00 22696.33 88.66 0.00 0.00 0.00 0.00 0.00 00:30:22.162 [2024-11-19T08:32:23.221Z] =================================================================================================================== 00:30:22.162 [2024-11-19T08:32:23.221Z] Total : 22696.33 88.66 0.00 0.00 0.00 0.00 0.00 00:30:22.162 00:30:23.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:23.100 Nvme0n1 : 4.00 22809.25 89.10 0.00 0.00 0.00 0.00 0.00 00:30:23.100 [2024-11-19T08:32:24.159Z] =================================================================================================================== 00:30:23.100 [2024-11-19T08:32:24.159Z] Total : 22809.25 89.10 0.00 0.00 0.00 0.00 0.00 00:30:23.100 00:30:24.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:24.037 Nvme0n1 : 5.00 22895.60 89.44 0.00 0.00 0.00 0.00 0.00 00:30:24.037 [2024-11-19T08:32:25.096Z] =================================================================================================================== 00:30:24.037 [2024-11-19T08:32:25.096Z] Total : 22895.60 89.44 0.00 0.00 0.00 0.00 0.00 00:30:24.037 00:30:24.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:24.974 Nvme0n1 : 6.00 22953.17 89.66 0.00 0.00 0.00 0.00 0.00 00:30:24.974 [2024-11-19T08:32:26.033Z] =================================================================================================================== 00:30:24.974 [2024-11-19T08:32:26.033Z] Total : 22953.17 89.66 0.00 0.00 0.00 0.00 0.00 00:30:24.974 00:30:25.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:25.912 Nvme0n1 : 7.00 22994.29 89.82 0.00 0.00 0.00 0.00 0.00 00:30:25.912 [2024-11-19T08:32:26.971Z] =================================================================================================================== 00:30:25.912 [2024-11-19T08:32:26.971Z] Total : 22994.29 89.82 0.00 0.00 0.00 0.00 0.00 00:30:25.912 00:30:27.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:27.290 Nvme0n1 : 8.00 23041.00 90.00 0.00 0.00 0.00 0.00 0.00 00:30:27.290 [2024-11-19T08:32:28.349Z] =================================================================================================================== 00:30:27.290 [2024-11-19T08:32:28.349Z] Total : 23041.00 90.00 0.00 0.00 0.00 0.00 0.00 00:30:27.290 00:30:28.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:28.226 Nvme0n1 : 9.00 23063.22 90.09 0.00 0.00 0.00 0.00 0.00 00:30:28.226 [2024-11-19T08:32:29.285Z] =================================================================================================================== 00:30:28.226 [2024-11-19T08:32:29.285Z] Total : 23063.22 90.09 0.00 0.00 0.00 0.00 0.00 00:30:28.226 00:30:29.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.164 Nvme0n1 : 10.00 23084.40 90.17 0.00 0.00 0.00 0.00 0.00 00:30:29.164 [2024-11-19T08:32:30.223Z] =================================================================================================================== 00:30:29.164 [2024-11-19T08:32:30.223Z] Total : 23084.40 90.17 0.00 0.00 0.00 0.00 0.00 00:30:29.164 00:30:29.164 00:30:29.164 Latency(us) 00:30:29.164 [2024-11-19T08:32:30.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.164 Nvme0n1 : 10.00 23082.47 90.17 0.00 0.00 5542.31 3276.80 26328.38 00:30:29.164 
[2024-11-19T08:32:30.223Z] =================================================================================================================== 00:30:29.164 [2024-11-19T08:32:30.223Z] Total : 23082.47 90.17 0.00 0.00 5542.31 3276.80 26328.38 00:30:29.164 { 00:30:29.164 "results": [ 00:30:29.164 { 00:30:29.164 "job": "Nvme0n1", 00:30:29.164 "core_mask": "0x2", 00:30:29.164 "workload": "randwrite", 00:30:29.164 "status": "finished", 00:30:29.164 "queue_depth": 128, 00:30:29.164 "io_size": 4096, 00:30:29.164 "runtime": 10.004909, 00:30:29.164 "iops": 23082.468816058197, 00:30:29.164 "mibps": 90.16589381272733, 00:30:29.164 "io_failed": 0, 00:30:29.164 "io_timeout": 0, 00:30:29.164 "avg_latency_us": 5542.305173118175, 00:30:29.164 "min_latency_us": 3276.8, 00:30:29.164 "max_latency_us": 26328.375652173912 00:30:29.164 } 00:30:29.164 ], 00:30:29.164 "core_count": 1 00:30:29.164 } 00:30:29.164 09:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1309657 00:30:29.164 09:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1309657 ']' 00:30:29.164 09:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1309657 00:30:29.164 09:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:30:29.164 09:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:29.164 09:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1309657 00:30:29.164 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:29.164 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:29.164 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1309657' 00:30:29.164 killing process with pid 1309657 00:30:29.164 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1309657 00:30:29.164 Received shutdown signal, test time was about 10.000000 seconds 00:30:29.165 00:30:29.165 Latency(us) 00:30:29.165 [2024-11-19T08:32:30.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.165 [2024-11-19T08:32:30.224Z] =================================================================================================================== 00:30:29.165 [2024-11-19T08:32:30.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:29.165 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1309657 00:30:29.165 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:29.425 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:30:29.683 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:29.683 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1306553 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1306553 00:30:29.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1306553 Killed "${NVMF_APP[@]}" "$@" 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1311506 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1311506 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1311506 ']' 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
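The sequence traced above is the core of the lvs_grow "dirty" case: the lvstore state is read over JSON-RPC, the target is killed with SIGKILL so the blobstore metadata is never flushed, and a fresh target is started to force recovery on next load. A minimal sketch of that sequence, assuming rpc.py is on PATH and that $lvs_uuid and $nvmfpid were captured earlier (placeholder names, not values from this run):

# read lvstore accounting before the unclean shutdown
free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
kill -9 "$nvmfpid"        # SIGKILL: no clean shutdown, lvstore left dirty
wait "$nvmfpid" || true   # reap the job; bash reports it as "Killed"
# a new target is then started; blobstore recovery runs when aio_bdev is re-created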
00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:29.942 09:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:29.942 [2024-11-19 09:32:30.902644] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:29.942 [2024-11-19 09:32:30.903616] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:30:29.942 [2024-11-19 09:32:30.903653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.942 [2024-11-19 09:32:30.985101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.202 [2024-11-19 09:32:31.027247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.202 [2024-11-19 09:32:31.027284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.202 [2024-11-19 09:32:31.027291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.202 [2024-11-19 09:32:31.027297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.202 [2024-11-19 09:32:31.027302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.202 [2024-11-19 09:32:31.027837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.202 [2024-11-19 09:32:31.095293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:30.202 [2024-11-19 09:32:31.095503] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
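The notices that follow come from the target restart in interrupt mode. For reference, the launch traced above has this shape; the flag meanings below are standard SPDK app-framework behavior, as confirmed by the notices in the log ("Tracepoint Group Mask 0xFFFF", "Set SPDK running in interrupt mode", "Reactor started on core 0"):

# -i 0             : shared-memory instance id (matches /dev/shm/*.0)
# -e 0xFFFF        : enable every tracepoint group
# --interrupt-mode : reactors wait on events instead of busy-polling
# -m 0x1           : core mask, a single reactor on core 0
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
nvmfpid=$!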
00:30:30.202 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:30.202 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:30:30.202 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:30.202 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:30.202 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:30.202 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.202 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:30.461 [2024-11-19 09:32:31.341179] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:30.461 [2024-11-19 09:32:31.341373] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:30.461 [2024-11-19 09:32:31.341456] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:30.461 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:30.461 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fb79bf15-8693-40f3-8aac-36d96bd0e7b6 00:30:30.461 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=fb79bf15-8693-40f3-8aac-36d96bd0e7b6 00:30:30.461 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:30.461 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:30:30.461 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:30.461 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:30.461 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:30.720 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fb79bf15-8693-40f3-8aac-36d96bd0e7b6 -t 2000 00:30:30.720 [ 00:30:30.720 { 00:30:30.720 "name": "fb79bf15-8693-40f3-8aac-36d96bd0e7b6", 00:30:30.720 "aliases": [ 00:30:30.720 "lvs/lvol" 00:30:30.720 ], 00:30:30.720 "product_name": "Logical Volume", 00:30:30.720 "block_size": 4096, 00:30:30.720 "num_blocks": 38912, 00:30:30.720 "uuid": "fb79bf15-8693-40f3-8aac-36d96bd0e7b6", 00:30:30.720 "assigned_rate_limits": { 00:30:30.720 "rw_ios_per_sec": 0, 00:30:30.720 "rw_mbytes_per_sec": 0, 00:30:30.720 
"r_mbytes_per_sec": 0, 00:30:30.720 "w_mbytes_per_sec": 0 00:30:30.720 }, 00:30:30.720 "claimed": false, 00:30:30.720 "zoned": false, 00:30:30.720 "supported_io_types": { 00:30:30.720 "read": true, 00:30:30.720 "write": true, 00:30:30.720 "unmap": true, 00:30:30.720 "flush": false, 00:30:30.720 "reset": true, 00:30:30.720 "nvme_admin": false, 00:30:30.720 "nvme_io": false, 00:30:30.720 "nvme_io_md": false, 00:30:30.720 "write_zeroes": true, 00:30:30.720 "zcopy": false, 00:30:30.720 "get_zone_info": false, 00:30:30.720 "zone_management": false, 00:30:30.720 "zone_append": false, 00:30:30.720 "compare": false, 00:30:30.720 "compare_and_write": false, 00:30:30.720 "abort": false, 00:30:30.720 "seek_hole": true, 00:30:30.720 "seek_data": true, 00:30:30.720 "copy": false, 00:30:30.720 "nvme_iov_md": false 00:30:30.720 }, 00:30:30.720 "driver_specific": { 00:30:30.720 "lvol": { 00:30:30.720 "lvol_store_uuid": "7a177020-74e3-4b1b-951f-6e76762cd47e", 00:30:30.720 "base_bdev": "aio_bdev", 00:30:30.720 "thin_provision": false, 00:30:30.720 "num_allocated_clusters": 38, 00:30:30.720 "snapshot": false, 00:30:30.720 "clone": false, 00:30:30.720 "esnap_clone": false 00:30:30.720 } 00:30:30.720 } 00:30:30.720 } 00:30:30.720 ] 00:30:30.720 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:30:30.720 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:30.720 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:30.979 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:30.980 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:30.980 09:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:31.238 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:31.238 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:31.497 [2024-11-19 09:32:32.336299] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:31.497 09:32:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:31.497 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:31.756 request: 00:30:31.756 { 00:30:31.756 "uuid": "7a177020-74e3-4b1b-951f-6e76762cd47e", 00:30:31.756 "method": "bdev_lvol_get_lvstores", 00:30:31.756 "req_id": 1 00:30:31.756 } 00:30:31.756 Got JSON-RPC error response 00:30:31.756 response: 00:30:31.756 { 00:30:31.756 "code": -19, 00:30:31.756 "message": "No such device" 00:30:31.756 } 00:30:31.756 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:31.757 aio_bdev 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fb79bf15-8693-40f3-8aac-36d96bd0e7b6 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=fb79bf15-8693-40f3-8aac-36d96bd0e7b6 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:30:31.757 09:32:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:30:31.757 09:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:32.016 09:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fb79bf15-8693-40f3-8aac-36d96bd0e7b6 -t 2000 00:30:32.274 [ 00:30:32.274 { 00:30:32.274 "name": "fb79bf15-8693-40f3-8aac-36d96bd0e7b6", 00:30:32.274 "aliases": [ 00:30:32.274 "lvs/lvol" 00:30:32.274 ], 00:30:32.274 "product_name": "Logical Volume", 00:30:32.274 "block_size": 4096, 00:30:32.274 "num_blocks": 38912, 00:30:32.274 "uuid": "fb79bf15-8693-40f3-8aac-36d96bd0e7b6", 00:30:32.274 "assigned_rate_limits": { 00:30:32.274 "rw_ios_per_sec": 0, 00:30:32.274 "rw_mbytes_per_sec": 0, 00:30:32.274 "r_mbytes_per_sec": 0, 00:30:32.274 "w_mbytes_per_sec": 0 00:30:32.274 }, 00:30:32.274 "claimed": false, 00:30:32.274 "zoned": false, 00:30:32.274 "supported_io_types": { 00:30:32.274 "read": true, 00:30:32.274 "write": true, 00:30:32.274 "unmap": true, 00:30:32.274 "flush": false, 00:30:32.274 "reset": true, 00:30:32.275 "nvme_admin": false, 00:30:32.275 "nvme_io": false, 00:30:32.275 "nvme_io_md": false, 00:30:32.275 "write_zeroes": true, 00:30:32.275 "zcopy": false, 00:30:32.275 "get_zone_info": false, 00:30:32.275 "zone_management": false, 00:30:32.275 "zone_append": false, 00:30:32.275 "compare": false, 00:30:32.275 "compare_and_write": false, 00:30:32.275 "abort": false, 00:30:32.275 "seek_hole": true, 00:30:32.275 "seek_data": true, 00:30:32.275 "copy": false, 00:30:32.275 "nvme_iov_md": false 00:30:32.275 }, 00:30:32.275 "driver_specific": { 00:30:32.275 "lvol": { 00:30:32.275 "lvol_store_uuid": "7a177020-74e3-4b1b-951f-6e76762cd47e", 00:30:32.275 "base_bdev": "aio_bdev", 00:30:32.275 "thin_provision": false, 00:30:32.275 "num_allocated_clusters": 38, 00:30:32.275 "snapshot": false, 00:30:32.275 "clone": false, 00:30:32.275 "esnap_clone": false 00:30:32.275 } 00:30:32.275 } 00:30:32.275 } 00:30:32.275 ] 00:30:32.275 09:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:30:32.275 09:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:32.275 09:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:32.533 09:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:32.533 09:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:32.533 09:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:32.793 09:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:32.793 09:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fb79bf15-8693-40f3-8aac-36d96bd0e7b6 00:30:32.793 09:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7a177020-74e3-4b1b-951f-6e76762cd47e 00:30:33.052 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:33.311 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:33.311 00:30:33.311 real 0m17.314s 00:30:33.311 user 0m34.829s 00:30:33.311 sys 0m3.783s 00:30:33.311 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:33.311 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:33.311 ************************************ 00:30:33.311 END TEST lvs_grow_dirty 00:30:33.312 ************************************ 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:33.312 nvmf_trace.0 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
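The pass condition for lvs_grow_dirty is visible in the trace above: after the dirty blobstore is replayed, cluster accounting must match the pre-kill state (99 total data clusters from the grow, 61 free). A sketch of that check using the same jq filters the script traces, with $lvs_uuid again a placeholder for the lvstore UUID printed in the log:

out=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid")
free=$(jq -r '.[0].free_clusters' <<< "$out")
data=$(jq -r '.[0].total_data_clusters' <<< "$out")
(( free == 61 && data == 99 )) || { echo "lvstore recovery mismatch" >&2; exit 1; }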
00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:33.312 rmmod nvme_tcp 00:30:33.312 rmmod nvme_fabrics 00:30:33.312 rmmod nvme_keyring 00:30:33.312 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1311506 ']' 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1311506 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1311506 ']' 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1311506 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1311506 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1311506' 00:30:33.571 killing process with pid 1311506 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1311506 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1311506 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.571 09:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.110 00:30:36.110 real 0m42.953s 00:30:36.110 user 0m52.955s 00:30:36.110 sys 0m10.148s 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:36.110 ************************************ 00:30:36.110 END TEST nvmf_lvs_grow 00:30:36.110 ************************************ 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:36.110 ************************************ 00:30:36.110 START TEST nvmf_bdev_io_wait 00:30:36.110 ************************************ 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:36.110 * Looking for test storage... 
00:30:36.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:36.110 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:36.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.111 --rc genhtml_branch_coverage=1 00:30:36.111 --rc genhtml_function_coverage=1 00:30:36.111 --rc genhtml_legend=1 00:30:36.111 --rc geninfo_all_blocks=1 00:30:36.111 --rc geninfo_unexecuted_blocks=1 00:30:36.111 00:30:36.111 ' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:36.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.111 --rc genhtml_branch_coverage=1 00:30:36.111 --rc genhtml_function_coverage=1 00:30:36.111 --rc genhtml_legend=1 00:30:36.111 --rc geninfo_all_blocks=1 00:30:36.111 --rc geninfo_unexecuted_blocks=1 00:30:36.111 00:30:36.111 ' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:36.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.111 --rc genhtml_branch_coverage=1 00:30:36.111 --rc genhtml_function_coverage=1 00:30:36.111 --rc genhtml_legend=1 00:30:36.111 --rc geninfo_all_blocks=1 00:30:36.111 --rc geninfo_unexecuted_blocks=1 00:30:36.111 00:30:36.111 ' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:36.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.111 --rc genhtml_branch_coverage=1 00:30:36.111 --rc genhtml_function_coverage=1 00:30:36.111 --rc genhtml_legend=1 00:30:36.111 --rc geninfo_all_blocks=1 00:30:36.111 --rc 
geninfo_unexecuted_blocks=1 00:30:36.111 00:30:36.111 ' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:36.111 09:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.683 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:42.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:42.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:42.684 Found net devices under 0000:86:00.0: cvl_0_0 00:30:42.684 
09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:42.684 Found net devices under 0000:86:00.1: cvl_0_1 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:30:42.684 00:30:42.684 --- 10.0.0.2 ping statistics --- 00:30:42.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.684 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:30:42.684 00:30:42.684 --- 10.0.0.1 ping statistics --- 00:30:42.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.684 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1315709 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1315709 00:30:42.684 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:42.685 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1315709 ']' 00:30:42.685 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.685 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:42.685 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:42.685 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:42.685 09:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 [2024-11-19 09:32:42.922527] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:42.685 [2024-11-19 09:32:42.923476] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:30:42.685 [2024-11-19 09:32:42.923510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.685 [2024-11-19 09:32:43.003965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.685 [2024-11-19 09:32:43.047786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.685 [2024-11-19 09:32:43.047824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.685 [2024-11-19 09:32:43.047831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.685 [2024-11-19 09:32:43.047837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.685 [2024-11-19 09:32:43.047842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.685 [2024-11-19 09:32:43.049400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.685 [2024-11-19 09:32:43.049514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:42.685 [2024-11-19 09:32:43.049529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:42.685 [2024-11-19 09:32:43.049536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.685 [2024-11-19 09:32:43.049938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 [2024-11-19 09:32:43.183436] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:42.685 [2024-11-19 09:32:43.184235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:42.685 [2024-11-19 09:32:43.184329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:42.685 [2024-11-19 09:32:43.184475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 [2024-11-19 09:32:43.194460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 Malloc0 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 [2024-11-19 09:32:43.266523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1315795 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1315797 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.685 { 00:30:42.685 "params": { 00:30:42.685 "name": "Nvme$subsystem", 00:30:42.685 "trtype": "$TEST_TRANSPORT", 00:30:42.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.685 "adrfam": "ipv4", 00:30:42.685 "trsvcid": "$NVMF_PORT", 00:30:42.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.685 "hdgst": ${hdgst:-false}, 00:30:42.685 "ddgst": ${ddgst:-false} 00:30:42.685 }, 00:30:42.685 "method": "bdev_nvme_attach_controller" 00:30:42.685 } 00:30:42.685 EOF 00:30:42.685 )") 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1315799 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.685 { 00:30:42.685 "params": { 00:30:42.685 "name": "Nvme$subsystem", 00:30:42.685 "trtype": "$TEST_TRANSPORT", 00:30:42.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.685 "adrfam": "ipv4", 00:30:42.685 "trsvcid": "$NVMF_PORT", 00:30:42.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.685 "hdgst": ${hdgst:-false}, 00:30:42.685 "ddgst": ${ddgst:-false} 00:30:42.685 }, 00:30:42.685 "method": "bdev_nvme_attach_controller" 00:30:42.685 } 00:30:42.685 EOF 00:30:42.685 )") 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=1315802 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:42.685 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.686 { 00:30:42.686 "params": { 00:30:42.686 "name": "Nvme$subsystem", 00:30:42.686 "trtype": "$TEST_TRANSPORT", 00:30:42.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.686 "adrfam": "ipv4", 00:30:42.686 "trsvcid": "$NVMF_PORT", 00:30:42.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.686 "hdgst": ${hdgst:-false}, 00:30:42.686 "ddgst": ${ddgst:-false} 00:30:42.686 }, 00:30:42.686 "method": "bdev_nvme_attach_controller" 00:30:42.686 } 00:30:42.686 EOF 00:30:42.686 )") 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.686 { 00:30:42.686 "params": { 00:30:42.686 "name": "Nvme$subsystem", 00:30:42.686 "trtype": "$TEST_TRANSPORT", 00:30:42.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.686 "adrfam": "ipv4", 00:30:42.686 "trsvcid": "$NVMF_PORT", 00:30:42.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.686 "hdgst": ${hdgst:-false}, 00:30:42.686 "ddgst": ${ddgst:-false} 00:30:42.686 }, 00:30:42.686 "method": "bdev_nvme_attach_controller" 00:30:42.686 } 00:30:42.686 EOF 00:30:42.686 )") 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1315795 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.686 "params": { 00:30:42.686 "name": "Nvme1", 00:30:42.686 "trtype": "tcp", 00:30:42.686 "traddr": "10.0.0.2", 00:30:42.686 "adrfam": "ipv4", 00:30:42.686 "trsvcid": "4420", 00:30:42.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.686 "hdgst": false, 00:30:42.686 "ddgst": false 00:30:42.686 }, 00:30:42.686 "method": "bdev_nvme_attach_controller" 00:30:42.686 }' 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.686 "params": { 00:30:42.686 "name": "Nvme1", 00:30:42.686 "trtype": "tcp", 00:30:42.686 "traddr": "10.0.0.2", 00:30:42.686 "adrfam": "ipv4", 00:30:42.686 "trsvcid": "4420", 00:30:42.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.686 "hdgst": false, 00:30:42.686 "ddgst": false 00:30:42.686 }, 00:30:42.686 "method": "bdev_nvme_attach_controller" 00:30:42.686 }' 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.686 "params": { 00:30:42.686 "name": "Nvme1", 00:30:42.686 "trtype": "tcp", 00:30:42.686 "traddr": "10.0.0.2", 00:30:42.686 "adrfam": "ipv4", 00:30:42.686 "trsvcid": "4420", 00:30:42.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.686 "hdgst": false, 00:30:42.686 "ddgst": false 00:30:42.686 }, 00:30:42.686 "method": "bdev_nvme_attach_controller" 00:30:42.686 }' 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:42.686 09:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.686 "params": { 00:30:42.686 "name": "Nvme1", 00:30:42.686 "trtype": "tcp", 00:30:42.686 "traddr": "10.0.0.2", 00:30:42.686 "adrfam": "ipv4", 00:30:42.686 "trsvcid": "4420", 00:30:42.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:42.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:42.686 "hdgst": false, 00:30:42.686 "ddgst": false 00:30:42.686 }, 00:30:42.686 "method": "bdev_nvme_attach_controller" 00:30:42.686 }' 00:30:42.686 [2024-11-19 09:32:43.317722] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:30:42.686 [2024-11-19 09:32:43.317768] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:42.686 [2024-11-19 09:32:43.318307] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:30:42.686 [2024-11-19 09:32:43.318307] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
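[editor's note] Each bdevperf instance above receives one of those printf'd JSON blobs on an anonymous file descriptor (--json /dev/fd/63 is bash process substitution). A sketch of the shape for the write job, assuming the standard SPDK subsystems/config wrapper around the bdev_nvme_attach_controller params shown in the log:

./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false }
    } ]
  } ]
}
EOF
)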
00:30:42.686 [2024-11-19 09:32:43.318358] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:42.686 [2024-11-19 09:32:43.318358] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:42.686 [2024-11-19 09:32:43.324369] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:30:42.686 [2024-11-19 09:32:43.324411] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:42.686 [2024-11-19 09:32:43.541305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.686 [2024-11-19 09:32:43.584714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.686 [2024-11-19 09:32:43.585980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:42.686 [2024-11-19 09:32:43.622759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:42.686 [2024-11-19 09:32:43.683222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.945 [2024-11-19 09:32:43.737047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:42.945 [2024-11-19 09:32:43.739886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.945 [2024-11-19 09:32:43.782709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:42.945 Running I/O for 1 seconds... 00:30:42.945 Running I/O for 1 seconds... 00:30:42.945 Running I/O for 1 seconds... 00:30:42.945 Running I/O for 1 seconds...
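[editor's note] The per-job tables that follow report both IOPS and MiB/s for the same 4096-byte I/O size, so the two columns always differ by a factor of 4096/2^20. A quick check against the write job's totals:

awk 'BEGIN { printf "%.2f MiB/s\n", 12079.56 * 4096 / 1048576 }'   # -> 47.19, matching the table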
00:30:43.881 12018.00 IOPS, 46.95 MiB/s 00:30:43.881 Latency(us) 00:30:43.881 [2024-11-19T08:32:44.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.881 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:43.881 Nvme1n1 : 1.01 12079.56 47.19 0.00 0.00 10561.60 3846.68 12822.26 00:30:43.881 [2024-11-19T08:32:44.940Z] =================================================================================================================== 00:30:43.881 [2024-11-19T08:32:44.940Z] Total : 12079.56 47.19 0.00 0.00 10561.60 3846.68 12822.26 00:30:43.881 246112.00 IOPS, 961.38 MiB/s 00:30:43.881 Latency(us) 00:30:43.881 [2024-11-19T08:32:44.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.881 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:43.881 Nvme1n1 : 1.00 245726.84 959.87 0.00 0.00 517.64 233.29 1538.67 00:30:43.881 [2024-11-19T08:32:44.940Z] =================================================================================================================== 00:30:43.881 [2024-11-19T08:32:44.940Z] Total : 245726.84 959.87 0.00 0.00 517.64 233.29 1538.67 00:30:43.881 11208.00 IOPS, 43.78 MiB/s 00:30:43.881 Latency(us) 00:30:43.881 [2024-11-19T08:32:44.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.881 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:43.881 Nvme1n1 : 1.01 11285.93 44.09 0.00 0.00 11309.44 4046.14 14360.93 00:30:43.881 [2024-11-19T08:32:44.941Z] =================================================================================================================== 00:30:43.882 [2024-11-19T08:32:44.941Z] Total : 11285.93 44.09 0.00 0.00 11309.44 4046.14 14360.93 00:30:44.140 09:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1315797 00:30:44.140 10546.00 IOPS, 41.20 MiB/s 00:30:44.140 Latency(us) 00:30:44.140 [2024-11-19T08:32:45.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.140 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:44.140 Nvme1n1 : 1.01 10622.57 41.49 0.00 0.00 12017.58 3989.15 18122.13 00:30:44.140 [2024-11-19T08:32:45.199Z] =================================================================================================================== 00:30:44.140 [2024-11-19T08:32:45.199Z] Total : 10622.57 41.49 0.00 0.00 12017.58 3989.15 18122.13 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1315799 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1315802 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.140 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.140 rmmod nvme_tcp 00:30:44.140 rmmod nvme_fabrics 00:30:44.141 rmmod nvme_keyring 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1315709 ']' 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1315709 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1315709 ']' 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1315709 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:44.141 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1315709 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1315709' 00:30:44.400 killing process with pid 1315709 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1315709 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1315709 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.400 09:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:46.941 00:30:46.941 real 0m10.722s 00:30:46.941 user 0m14.569s 00:30:46.941 sys 0m6.667s 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:46.941 ************************************ 00:30:46.941 END TEST nvmf_bdev_io_wait 00:30:46.941 ************************************ 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:46.941 ************************************ 00:30:46.941 START TEST nvmf_queue_depth 00:30:46.941 ************************************ 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:46.941 * Looking for test storage... 
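[editor's note] The iptr/remove_spdk_ns teardown above relies on the SPDK_NVMF comment added when the rule was installed: save the ruleset, drop the tagged lines, re-load, then tear down the test namespace. A sketch, with the namespace deletion as an assumed stand-in for _remove_spdk_ns:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # scrub only SPDK's tagged rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed _remove_spdk_ns equivalent
ip -4 addr flush cvl_0_1                               # return the initiator port to a clean state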
00:30:46.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:46.941 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:46.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.942 --rc genhtml_branch_coverage=1 00:30:46.942 --rc genhtml_function_coverage=1 00:30:46.942 --rc genhtml_legend=1 00:30:46.942 --rc geninfo_all_blocks=1 00:30:46.942 --rc geninfo_unexecuted_blocks=1 00:30:46.942 00:30:46.942 ' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:46.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.942 --rc genhtml_branch_coverage=1 00:30:46.942 --rc genhtml_function_coverage=1 00:30:46.942 --rc genhtml_legend=1 00:30:46.942 --rc geninfo_all_blocks=1 00:30:46.942 --rc geninfo_unexecuted_blocks=1 00:30:46.942 00:30:46.942 ' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:46.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.942 --rc genhtml_branch_coverage=1 00:30:46.942 --rc genhtml_function_coverage=1 00:30:46.942 --rc genhtml_legend=1 00:30:46.942 --rc geninfo_all_blocks=1 00:30:46.942 --rc geninfo_unexecuted_blocks=1 00:30:46.942 00:30:46.942 ' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:46.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.942 --rc genhtml_branch_coverage=1 00:30:46.942 --rc genhtml_function_coverage=1 00:30:46.942 --rc genhtml_legend=1 00:30:46.942 --rc geninfo_all_blocks=1 00:30:46.942 --rc 
geninfo_unexecuted_blocks=1 00:30:46.942 00:30:46.942 ' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:46.942 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:46.943 09:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
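[editor's note] The gather_supported_nvmf_pci_devs walk that follows is bash arrays of vendor:device IDs matched against the PCI bus; a stripped-down sketch using the one device ID that actually appears in this log (the full script keeps separate e810/x722/mlx lists and repeats the sysfs lookup per hit):

intel=0x8086
pci=0000:86:00.0                                    # one of the two E810 ports found below
dev_id=$(cat /sys/bus/pci/devices/$pci/device)      # -> 0x159b on this box
[[ $dev_id == 0x159b ]] && echo "Found $pci ($intel - $dev_id)"
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # interfaces bound to that function
echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"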
00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:53.517 09:32:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:53.517 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:53.517 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:53.517 Found net devices under 0000:86:00.0: cvl_0_0 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:53.517 Found net devices under 0000:86:00.1: cvl_0_1 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:53.517 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:53.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:53.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:30:53.518 00:30:53.518 --- 10.0.0.2 ping statistics --- 00:30:53.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.518 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:53.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:53.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:30:53.518 00:30:53.518 --- 10.0.0.1 ping statistics --- 00:30:53.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:53.518 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1319572 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1319572 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1319572 ']' 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:53.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
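Condensed from the nvmf_tcp_init trace above, this is the namespace split that lets one host play both roles over a single two-port NIC: the target port moves into a private network namespace, the initiator port stays in the root namespace, and a firewall rule admits NVMe/TCP on the default port. Names and addresses are exactly those in the log; this is a sketch of the effect, not the helper itself.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tag the rule so teardown can find and strip it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                               # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> initiator port

The two pings are the acceptance check: traffic must cross the physical link in both directions before the target is started.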
00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.518 [2024-11-19 09:32:53.685880] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:53.518 [2024-11-19 09:32:53.686912] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:30:53.518 [2024-11-19 09:32:53.686961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:53.518 [2024-11-19 09:32:53.767631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.518 [2024-11-19 09:32:53.810190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:53.518 [2024-11-19 09:32:53.810224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:53.518 [2024-11-19 09:32:53.810232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:53.518 [2024-11-19 09:32:53.810238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:53.518 [2024-11-19 09:32:53.810243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:53.518 [2024-11-19 09:32:53.810760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.518 [2024-11-19 09:32:53.876480] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:53.518 [2024-11-19 09:32:53.876698] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
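nvmfappstart launches nvmf_tgt inside the namespace with --interrupt-mode, which lets SPDK reactors sleep on file descriptors instead of busy-polling, and then waitforlisten blocks until the RPC server answers on /var/tmp/spdk.sock. A rough stand-in for that wait follows; the real helper in autotest_common.sh handles more failure cases, and spdk_get_version is simply one cheap RPC to poll with.

    # Illustrative wait loop; rpc_addr and the retry cap come from the trace above.
    pid=1319572
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        if scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null; then
            break    # RPC socket is live; provisioning can proceed
        fi
        sleep 0.1
    done
    (( i == 100 )) && { echo "timed out waiting for $rpc_addr" >&2; exit 1; }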
00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.518 [2024-11-19 09:32:53.943438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.518 Malloc0 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.518 09:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
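Stripped of the rpc_cmd wrapper, the five RPCs traced above build the whole target: a TCP transport, a RAM-backed bdev, a subsystem, a namespace, and a listener. Issued directly through scripts/rpc.py the same sequence would read as below; the flags are copied from the trace, the comments are interpretation.

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8 KiB I/O unit size
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                      # -a: allow any host to connect
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420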
00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.518 [2024-11-19 09:32:54.011542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1319598 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1319598 /var/tmp/bdevperf.sock 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1319598 ']' 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:53.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:53.518 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.518 [2024-11-19 09:32:54.064067] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
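The initiator half is bdevperf started in RPC-driven mode: -z holds the app idle until told to run, -q 1024 keeps 1024 I/Os outstanding, -o 4096 sizes each at 4 KiB, and -w verify -t 10 runs a ten-second write-then-read-back-and-check workload. Condensed from the trace, the flow is:

    # 1. Start bdevperf idle, listening on its own RPC socket.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    # (the harness waits for the bdevperf RPC socket here, as with nvmf_tgt above)
    # 2. Attach the exported namespace over NVMe/TCP; it surfaces as NVMe0n1.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # 3. Fire the configured workload and collect the JSON results.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests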
00:30:53.519 [2024-11-19 09:32:54.064108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1319598 ] 00:30:53.519 [2024-11-19 09:32:54.121001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.519 [2024-11-19 09:32:54.164161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.519 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:53.519 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:30:53.519 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.519 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.519 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:53.519 NVMe0n1 00:30:53.519 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.519 09:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:53.778 Running I/O for 10 seconds... 00:30:55.811 11272.00 IOPS, 44.03 MiB/s [2024-11-19T08:32:57.808Z] 11779.50 IOPS, 46.01 MiB/s [2024-11-19T08:32:58.744Z] 11947.67 IOPS, 46.67 MiB/s [2024-11-19T08:32:59.681Z] 12039.75 IOPS, 47.03 MiB/s [2024-11-19T08:33:00.619Z] 12091.80 IOPS, 47.23 MiB/s [2024-11-19T08:33:01.997Z] 12115.83 IOPS, 47.33 MiB/s [2024-11-19T08:33:02.933Z] 12142.43 IOPS, 47.43 MiB/s [2024-11-19T08:33:03.869Z] 12158.88 IOPS, 47.50 MiB/s [2024-11-19T08:33:04.805Z] 12175.33 IOPS, 47.56 MiB/s [2024-11-19T08:33:04.805Z] 12180.60 IOPS, 47.58 MiB/s 00:31:03.746 Latency(us) 00:31:03.746 [2024-11-19T08:33:04.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.746 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:03.746 Verification LBA range: start 0x0 length 0x4000 00:31:03.746 NVMe0n1 : 10.06 12207.39 47.69 0.00 0.00 83616.55 19603.81 53796.51 00:31:03.746 [2024-11-19T08:33:04.805Z] =================================================================================================================== 00:31:03.746 [2024-11-19T08:33:04.805Z] Total : 12207.39 47.69 0.00 0.00 83616.55 19603.81 53796.51 00:31:03.746 { 00:31:03.746 "results": [ 00:31:03.746 { 00:31:03.746 "job": "NVMe0n1", 00:31:03.746 "core_mask": "0x1", 00:31:03.746 "workload": "verify", 00:31:03.746 "status": "finished", 00:31:03.746 "verify_range": { 00:31:03.746 "start": 0, 00:31:03.746 "length": 16384 00:31:03.746 }, 00:31:03.746 "queue_depth": 1024, 00:31:03.746 "io_size": 4096, 00:31:03.746 "runtime": 10.061937, 00:31:03.746 "iops": 12207.391081856307, 00:31:03.746 "mibps": 47.6851214135012, 00:31:03.746 "io_failed": 0, 00:31:03.746 "io_timeout": 0, 00:31:03.746 "avg_latency_us": 83616.54563053213, 00:31:03.746 "min_latency_us": 19603.812173913044, 00:31:03.746 "max_latency_us": 53796.507826086956 00:31:03.746 } 
00:31:03.746 ], 00:31:03.746 "core_count": 1 00:31:03.746 } 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1319598 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1319598 ']' 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1319598 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1319598 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1319598' 00:31:03.746 killing process with pid 1319598 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1319598 00:31:03.746 Received shutdown signal, test time was about 10.000000 seconds 00:31:03.746 00:31:03.746 Latency(us) 00:31:03.746 [2024-11-19T08:33:04.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.746 [2024-11-19T08:33:04.805Z] =================================================================================================================== 00:31:03.746 [2024-11-19T08:33:04.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:03.746 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1319598 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:04.005 rmmod nvme_tcp 00:31:04.005 rmmod nvme_fabrics 00:31:04.005 rmmod nvme_keyring 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
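The result block above is internally consistent, which is a quick way to sanity-check a run: 12207.39 IOPS of 4096-byte I/O is 12207.39 * 4096 / 2^20 ≈ 47.69 MiB/s, matching the reported mibps, and by Little's law the mean latency at a sustained queue depth of 1024 should be roughly 1024 / 12207.39 s ≈ 83.9 ms, in line with the measured 83.6 ms average. The teardown that follows is symmetric with the setup: killprocess stops both apps, modprobe -r unwinds the nvme-tcp/nvme-fabrics/nvme-keyring stack, and iptr restores the firewall by replaying iptables-save output with every SPDK_NVMF-tagged rule filtered out, removing exactly the ACCEPT rule added during setup.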
00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1319572 ']' 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1319572 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1319572 ']' 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1319572 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:04.005 09:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1319572 00:31:04.005 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:04.005 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:04.005 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1319572' 00:31:04.005 killing process with pid 1319572 00:31:04.005 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1319572 00:31:04.005 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1319572 00:31:04.263 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.264 09:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.800 00:31:06.800 real 0m19.754s 00:31:06.800 user 0m22.921s 00:31:06.800 sys 0m6.262s 00:31:06.800 09:33:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:06.800 ************************************ 00:31:06.800 END TEST nvmf_queue_depth 00:31:06.800 ************************************ 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:06.800 ************************************ 00:31:06.800 START TEST nvmf_target_multipath 00:31:06.800 ************************************ 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:06.800 * Looking for test storage... 00:31:06.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.800 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:06.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.801 --rc genhtml_branch_coverage=1 00:31:06.801 --rc genhtml_function_coverage=1 00:31:06.801 --rc genhtml_legend=1 00:31:06.801 --rc geninfo_all_blocks=1 00:31:06.801 --rc geninfo_unexecuted_blocks=1 00:31:06.801 00:31:06.801 ' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:06.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.801 --rc genhtml_branch_coverage=1 00:31:06.801 --rc genhtml_function_coverage=1 00:31:06.801 --rc genhtml_legend=1 00:31:06.801 --rc geninfo_all_blocks=1 00:31:06.801 --rc geninfo_unexecuted_blocks=1 00:31:06.801 00:31:06.801 ' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:06.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.801 --rc genhtml_branch_coverage=1 00:31:06.801 --rc genhtml_function_coverage=1 00:31:06.801 --rc genhtml_legend=1 
00:31:06.801 --rc geninfo_all_blocks=1 00:31:06.801 --rc geninfo_unexecuted_blocks=1 00:31:06.801 00:31:06.801 ' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:06.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.801 --rc genhtml_branch_coverage=1 00:31:06.801 --rc genhtml_function_coverage=1 00:31:06.801 --rc genhtml_legend=1 00:31:06.801 --rc geninfo_all_blocks=1 00:31:06.801 --rc geninfo_unexecuted_blocks=1 00:31:06.801 00:31:06.801 ' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:06.801 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:06.802 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:06.802 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.802 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.802 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.802 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:06.802 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:06.802 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:06.802 09:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.370 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.370 09:33:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:13.371 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:13.371 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:13.371 09:33:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:13.371 Found net devices under 0000:86:00.0: cvl_0_0 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:13.371 Found net devices under 0000:86:00.1: cvl_0_1 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:13.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:13.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:31:13.371 00:31:13.371 --- 10.0.0.2 ping statistics --- 00:31:13.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.371 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:13.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:31:13.371 00:31:13.371 --- 10.0.0.1 ping statistics --- 00:31:13.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.371 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:13.371 only one NIC for nvmf test 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:13.371 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.372 rmmod nvme_tcp 00:31:13.372 rmmod nvme_fabrics 00:31:13.372 rmmod nvme_keyring 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:13.372 09:33:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.372 09:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:14.773 09:33:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.773 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.774 00:31:14.774 real 0m8.292s 00:31:14.774 user 0m1.783s 00:31:14.774 sys 0m4.512s 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:14.774 ************************************ 00:31:14.774 END TEST nvmf_target_multipath 00:31:14.774 ************************************ 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.774 ************************************ 00:31:14.774 START TEST nvmf_zcopy 00:31:14.774 ************************************ 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:14.774 * Looking for test storage... 
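The multipath run above never reaches its I/O phase: target/multipath.sh@45 tests an empty string ('[' -z ']'), prints the "only one NIC" notice, tears down, and exits 0. A condensed sketch of that guard follows; the variable name is inferred from nvmf/common.sh@262 earlier in this run (which left NVMF_SECOND_TARGET_IP empty) and is an assumption, not a quote of multipath.sh:

    # sketch of multipath.sh@45-48 as reconstructed from the trace
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini    # rmmod nvme-tcp/nvme-fabrics, restore iptables, flush and remove the netns
        exit 0
    fi

The teardown consequently runs twice in the log: once from the explicit nvmftestfini at multipath.sh@47, and once more from the EXIT trap (visible as target/multipath.sh@1), which is why the whole rmmod/iptr/remove_spdk_ns sequence repeats verbatim before the END TEST banner.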
00:31:14.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:31:14.774 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:15.034 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.035 --rc genhtml_branch_coverage=1 00:31:15.035 --rc genhtml_function_coverage=1 00:31:15.035 --rc genhtml_legend=1 00:31:15.035 --rc geninfo_all_blocks=1 00:31:15.035 --rc geninfo_unexecuted_blocks=1 00:31:15.035 00:31:15.035 ' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.035 --rc genhtml_branch_coverage=1 00:31:15.035 --rc genhtml_function_coverage=1 00:31:15.035 --rc genhtml_legend=1 00:31:15.035 --rc geninfo_all_blocks=1 00:31:15.035 --rc geninfo_unexecuted_blocks=1 00:31:15.035 00:31:15.035 ' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.035 --rc genhtml_branch_coverage=1 00:31:15.035 --rc genhtml_function_coverage=1 00:31:15.035 --rc genhtml_legend=1 00:31:15.035 --rc geninfo_all_blocks=1 00:31:15.035 --rc geninfo_unexecuted_blocks=1 00:31:15.035 00:31:15.035 ' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:15.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.035 --rc genhtml_branch_coverage=1 00:31:15.035 --rc genhtml_function_coverage=1 00:31:15.035 --rc genhtml_legend=1 00:31:15.035 --rc geninfo_all_blocks=1 00:31:15.035 --rc geninfo_unexecuted_blocks=1 00:31:15.035 00:31:15.035 ' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.035 09:33:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.035 09:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:21.611 09:33:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:21.611 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:21.611 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:21.611 Found net devices under 0000:86:00.0: cvl_0_0 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:21.611 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:21.612 Found net devices under 0000:86:00.1: cvl_0_1 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:21.612 09:33:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:21.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:21.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:31:21.612 00:31:21.612 --- 10.0.0.2 ping statistics --- 00:31:21.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.612 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:21.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:21.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:31:21.612 00:31:21.612 --- 10.0.0.1 ping statistics --- 00:31:21.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:21.612 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1328759 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1328759 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1328759 ']' 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:21.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:21.612 09:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.612 [2024-11-19 09:33:21.831975] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:21.612 [2024-11-19 09:33:21.832890] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:31:21.612 [2024-11-19 09:33:21.832925] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.612 [2024-11-19 09:33:21.912394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.612 [2024-11-19 09:33:21.953374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.612 [2024-11-19 09:33:21.953408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.612 [2024-11-19 09:33:21.953415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.612 [2024-11-19 09:33:21.953421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.612 [2024-11-19 09:33:21.953426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.612 [2024-11-19 09:33:21.953977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.612 [2024-11-19 09:33:22.021645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:21.612 [2024-11-19 09:33:22.021865] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
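By this point nvmf_tcp_init has rebuilt the same two-port topology the multipath test used: the E810 port enumerated as cvl_0_0 is moved into a fresh network namespace to act as the target, while the sibling port cvl_0_1 stays in the root namespace as the initiator (the cross-namespace pings succeeding implies the two ports evidently have link to one another). Condensed from the trace, where every command appears verbatim:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open TCP/4420 on the initiator port, tagged so iptr can strip the rule on teardown
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

NVMF_TARGET_NS_CMD then prefixes target-side commands with ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt launch above runs inside the namespace while bdevperf later runs outside it.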
00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.612 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.613 [2024-11-19 09:33:22.090644] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.613 [2024-11-19 09:33:22.118860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:21.613 09:33:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.613 malloc0 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:21.613 { 00:31:21.613 "params": { 00:31:21.613 "name": "Nvme$subsystem", 00:31:21.613 "trtype": "$TEST_TRANSPORT", 00:31:21.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:21.613 "adrfam": "ipv4", 00:31:21.613 "trsvcid": "$NVMF_PORT", 00:31:21.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:21.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:21.613 "hdgst": ${hdgst:-false}, 00:31:21.613 "ddgst": ${ddgst:-false} 00:31:21.613 }, 00:31:21.613 "method": "bdev_nvme_attach_controller" 00:31:21.613 } 00:31:21.613 EOF 00:31:21.613 )") 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:21.613 09:33:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:21.613 "params": { 00:31:21.613 "name": "Nvme1", 00:31:21.613 "trtype": "tcp", 00:31:21.613 "traddr": "10.0.0.2", 00:31:21.613 "adrfam": "ipv4", 00:31:21.613 "trsvcid": "4420", 00:31:21.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:21.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:21.613 "hdgst": false, 00:31:21.613 "ddgst": false 00:31:21.613 }, 00:31:21.613 "method": "bdev_nvme_attach_controller" 00:31:21.613 }' 00:31:21.613 [2024-11-19 09:33:22.213256] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
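The rpc_cmd calls at zcopy.sh@22-30 provision the freshly started target over its RPC socket. rpc_cmd is the harness's wrapper; spelled out as direct scripts/rpc.py invocations (arguments copied from the trace; the wrapper equivalence and the default socket path are assumptions), the sequence is roughly:

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy         # TCP transport with zero-copy requested
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MiB RAM-backed bdev, 4096-byte blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1

bdevperf then reaches the subsystem through 10.0.0.2:4420 using the JSON printed above, whose single bdev_nvme_attach_controller entry creates the Nvme1n1 bdev that the 10-second verify workload runs against.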
00:31:21.613 [2024-11-19 09:33:22.213305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328836 ] 00:31:21.613 [2024-11-19 09:33:22.287834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.613 [2024-11-19 09:33:22.329079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.613 Running I/O for 10 seconds... 00:31:23.558 8294.00 IOPS, 64.80 MiB/s [2024-11-19T08:33:25.994Z] 8358.50 IOPS, 65.30 MiB/s [2024-11-19T08:33:26.930Z] 8377.67 IOPS, 65.45 MiB/s [2024-11-19T08:33:27.867Z] 8404.75 IOPS, 65.66 MiB/s [2024-11-19T08:33:28.805Z] 8412.20 IOPS, 65.72 MiB/s [2024-11-19T08:33:29.744Z] 8418.50 IOPS, 65.77 MiB/s [2024-11-19T08:33:30.680Z] 8418.71 IOPS, 65.77 MiB/s [2024-11-19T08:33:32.057Z] 8400.75 IOPS, 65.63 MiB/s [2024-11-19T08:33:32.993Z] 8404.00 IOPS, 65.66 MiB/s [2024-11-19T08:33:32.993Z] 8407.80 IOPS, 65.69 MiB/s 00:31:31.934 Latency(us) 00:31:31.934 [2024-11-19T08:33:32.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:31.934 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:31.934 Verification LBA range: start 0x0 length 0x1000 00:31:31.934 Nvme1n1 : 10.01 8409.87 65.70 0.00 0.00 15177.18 1296.47 22909.11 00:31:31.934 [2024-11-19T08:33:32.993Z] =================================================================================================================== 00:31:31.934 [2024-11-19T08:33:32.993Z] Total : 8409.87 65.70 0.00 0.00 15177.18 1296.47 22909.11 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1330604 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:31.934 { 00:31:31.934 "params": { 00:31:31.934 "name": "Nvme$subsystem", 00:31:31.934 "trtype": "$TEST_TRANSPORT", 00:31:31.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:31.934 "adrfam": "ipv4", 00:31:31.934 "trsvcid": "$NVMF_PORT", 00:31:31.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:31.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:31.934 "hdgst": ${hdgst:-false}, 00:31:31.934 "ddgst": ${ddgst:-false} 00:31:31.934 }, 00:31:31.934 "method": "bdev_nvme_attach_controller" 00:31:31.934 } 00:31:31.934 EOF 00:31:31.934 )") 00:31:31.934 [2024-11-19 09:33:32.802310] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:31:31.934 [2024-11-19 09:33:32.802341] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:31.934 09:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:31.934 "params": { 00:31:31.934 "name": "Nvme1", 00:31:31.934 "trtype": "tcp", 00:31:31.934 "traddr": "10.0.0.2", 00:31:31.934 "adrfam": "ipv4", 00:31:31.934 "trsvcid": "4420", 00:31:31.934 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:31.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:31.934 "hdgst": false, 00:31:31.934 "ddgst": false 00:31:31.934 }, 00:31:31.934 "method": "bdev_nvme_attach_controller" 00:31:31.934 }' 00:31:31.934 [2024-11-19 09:33:32.814278] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.934 [2024-11-19 09:33:32.814290] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 [2024-11-19 09:33:32.826278] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.934 [2024-11-19 09:33:32.826288] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 [2024-11-19 09:33:32.838276] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.934 [2024-11-19 09:33:32.838285] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 [2024-11-19 09:33:32.844960] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
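Both bdevperf invocations pass --json /dev/fd/62 or /dev/fd/63: these are bash process substitutions, so the configuration emitted by gen_nvmf_target_json is streamed straight to bdevperf without ever touching disk. An equivalent spelling of the second launch, with flags copied from the trace:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # <(...) is what shows up in the trace as /dev/fd/63; the workload is 5 s of
    # 50/50 random read/write at queue depth 128 with 8 KiB I/O units
    "$bdevperf" --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192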
00:31:31.934 [2024-11-19 09:33:32.845012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330604 ] 00:31:31.934 [2024-11-19 09:33:32.850276] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.934 [2024-11-19 09:33:32.850292] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 [2024-11-19 09:33:32.862276] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.934 [2024-11-19 09:33:32.862285] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 [2024-11-19 09:33:32.874277] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.934 [2024-11-19 09:33:32.874286] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 [2024-11-19 09:33:32.886277] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.934 [2024-11-19 09:33:32.886285] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 [2024-11-19 09:33:32.898277] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.934 [2024-11-19 09:33:32.898286] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 [2024-11-19 09:33:32.910287] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.934 [2024-11-19 09:33:32.910297] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.934 [2024-11-19 09:33:32.916871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.934 [2024-11-19 09:33:32.922280] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.935 [2024-11-19 09:33:32.922291] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.935 [2024-11-19 09:33:32.934279] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.935 [2024-11-19 09:33:32.934292] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.935 [2024-11-19 09:33:32.946277] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.935 [2024-11-19 09:33:32.946286] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.935 [2024-11-19 09:33:32.958062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.935 [2024-11-19 09:33:32.958279] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.935 [2024-11-19 09:33:32.958289] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.935 [2024-11-19 09:33:32.970280] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.935 [2024-11-19 09:33:32.970295] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:31.935 [2024-11-19 09:33:32.982290] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:31.935 [2024-11-19 09:33:32.982312] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:32.194 [2024-11-19 09:33:32.994283] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
00:31:32.194 [2024-11-19 09:33:32.994283] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:32.194 [2024-11-19 09:33:32.994297] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats at ~12 ms intervals from 09:33:33.006 through 09:33:33.102 ...]
Running I/O for 5 seconds...
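
From here the log interleaves two activities: bdevperf runs its timed I/O, while the test script keeps re-issuing an add-namespace RPC for an NSID the subsystem already owns, so each attempt fails in spdk_nvmf_subsystem_add_ns_ext and is reported again by nvmf_rpc_ns_paused. A sketch of one such failing attempt; the Malloc0 bdev name is an assumption for illustration:

# Sketch: adding NSID 1 to a subsystem that already has it fails, producing
# exactly the "Requested NSID 1 already in use" / "Unable to add namespace"
# pair seen above on every attempt (Malloc0 is an assumed bdev name).
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
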
00:31:32.194 [2024-11-19 09:33:33.119965] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:32.194 [2024-11-19 09:33:33.119985] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats roughly every 10-15 ms from 09:33:33.135 through 09:33:33.918 ...]
[... error pairs continue from 09:33:33.930 through 09:33:34.103 ...]
00:31:33.231 16410.00 IOPS, 128.20 MiB/s [2024-11-19T08:33:34.290Z]
00:31:33.231 [2024-11-19 09:33:34.119228] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:33.231 [2024-11-19 09:33:34.119251] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
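
The periodic bdevperf sample is internally consistent with an 8 KiB I/O size: 16410 IOPS x 8192 B = 134,430,720 B/s = 128.20 MiB/s, and the later samples (16441 -> 128.45, 16432 -> 128.38) check out the same way. A one-line verification; the 8192-byte block size is inferred from this ratio, not stated anywhere in the log:

# 16410 IOPS at an inferred 8 KiB block size, converted to MiB/s:
echo 'scale=2; 16410 * 8192 / 1048576' | bc   # prints 128.20
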
[... the error pair repeats roughly every 10-15 ms from 09:33:34.134 through 09:33:34.950 ...]
[... error pairs continue from 09:33:34.964 through 09:33:35.103 ...]
00:31:34.271 16441.00 IOPS, 128.45 MiB/s [2024-11-19T08:33:35.330Z]
[... error pairs at 09:33:35.119, .133 and .147 ...]
[... the error pair repeats roughly every 10-15 ms from 09:33:35.162 through 09:33:35.962 ...]
[... error pairs continue from 09:33:35.975 through 09:33:36.118 ...]
00:31:35.311 16432.00 IOPS, 128.38 MiB/s [2024-11-19T08:33:36.370Z]
[... error pairs at 09:33:36.130, .144, .159 and .174 ...]
[... the error pair repeats roughly every 10-15 ms from 09:33:36.186 through 09:33:36.783 ...]
00:31:35.830 [2024-11-19 09:33:36.793972]
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.830 [2024-11-19 09:33:36.793990] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.830 [2024-11-19 09:33:36.807791] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.830 [2024-11-19 09:33:36.807810] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.830 [2024-11-19 09:33:36.822935] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.830 [2024-11-19 09:33:36.822960] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.830 [2024-11-19 09:33:36.833934] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.830 [2024-11-19 09:33:36.833958] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.830 [2024-11-19 09:33:36.848672] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.830 [2024-11-19 09:33:36.848690] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.830 [2024-11-19 09:33:36.863396] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.830 [2024-11-19 09:33:36.863415] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:35.830 [2024-11-19 09:33:36.878511] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:35.830 [2024-11-19 09:33:36.878531] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:36.890200] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:36.890221] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:36.904142] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:36.904161] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:36.919111] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:36.919128] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:36.934469] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:36.934487] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:36.946028] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:36.946047] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:36.960507] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:36.960526] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:36.975483] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:36.975500] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:36.986091] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:36.986110] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.000164] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.000182] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.015100] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.015119] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.030718] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.030736] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.043514] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.043532] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.058239] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.058257] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.070855] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.070872] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.084358] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.084376] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.099117] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.099135] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.114450] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.114468] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 16435.00 IOPS, 128.40 MiB/s [2024-11-19T08:33:37.149Z] [2024-11-19 09:33:37.125290] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.125308] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.090 [2024-11-19 09:33:37.140049] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.090 [2024-11-19 09:33:37.140067] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.155561] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.155579] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.170484] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.170502] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.180931] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.180957] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 
09:33:37.196306] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.196325] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.211067] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.211084] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.226290] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.226309] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.237485] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.237503] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.252478] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.252496] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.267526] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.267544] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.277624] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.277642] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.292416] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.292442] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.307791] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.307809] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.323346] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.323365] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.338014] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.338033] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.351998] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.352015] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.366791] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.366808] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.382553] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.382572] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.350 [2024-11-19 09:33:37.393094] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.350 [2024-11-19 09:33:37.393113] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.408708] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.408729] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.423994] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.424012] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.438882] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.438899] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.453896] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.453914] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.467556] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.467574] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.478075] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.478094] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.492445] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.492464] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.507583] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.507601] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.522595] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.522612] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.535373] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.535391] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.550640] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.550657] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.563050] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.563072] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.575960] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.575978] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.591543] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.591561] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.606417] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.606435] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.617746] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.617764] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.632320] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.632338] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.647621] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.647638] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.609 [2024-11-19 09:33:37.663226] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.609 [2024-11-19 09:33:37.663243] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.674964] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.674982] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.687915] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.687933] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.702760] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.702777] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.714239] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.714258] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.728535] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.728553] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.743438] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.743456] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.758738] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.758755] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.770994] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.771011] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.783661] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.783679] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.799026] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.799044] 
nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.810251] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.869 [2024-11-19 09:33:37.810270] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.869 [2024-11-19 09:33:37.824229] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.870 [2024-11-19 09:33:37.824251] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.870 [2024-11-19 09:33:37.840063] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.870 [2024-11-19 09:33:37.840082] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.870 [2024-11-19 09:33:37.855027] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.870 [2024-11-19 09:33:37.855044] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.870 [2024-11-19 09:33:37.870050] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.870 [2024-11-19 09:33:37.870069] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.870 [2024-11-19 09:33:37.883263] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.870 [2024-11-19 09:33:37.883282] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.870 [2024-11-19 09:33:37.898622] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.870 [2024-11-19 09:33:37.898640] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:36.870 [2024-11-19 09:33:37.914463] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:36.870 [2024-11-19 09:33:37.914482] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.129 [2024-11-19 09:33:37.928248] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.129 [2024-11-19 09:33:37.928267] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.129 [2024-11-19 09:33:37.943634] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.129 [2024-11-19 09:33:37.943653] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.129 [2024-11-19 09:33:37.958601] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:37.958618] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:37.973967] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:37.973986] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:37.988366] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:37.988384] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.003278] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.003297] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.018135] 
subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.018155] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.029967] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.029986] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.044637] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.044656] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.059810] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.059829] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.074907] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.074925] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.090402] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.090421] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.101233] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.101257] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.116360] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.116380] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 16415.60 IOPS, 128.25 MiB/s [2024-11-19T08:33:38.189Z] [2024-11-19 09:33:38.130027] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.130047] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 00:31:37.130 Latency(us) 00:31:37.130 [2024-11-19T08:33:38.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.130 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:37.130 Nvme1n1 : 5.01 16416.02 128.25 0.00 0.00 7789.29 2080.06 13335.15 00:31:37.130 [2024-11-19T08:33:38.189Z] =================================================================================================================== 00:31:37.130 [2024-11-19T08:33:38.189Z] Total : 16416.02 128.25 0.00 0.00 7789.29 2080.06 13335.15 00:31:37.130 [2024-11-19 09:33:38.138282] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.138299] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.150278] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.150292] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 09:33:38.162291] subsystem.c:2273:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:37.130 [2024-11-19 09:33:38.162308] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:37.130 [2024-11-19 
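The summary row is internally consistent, which is a quick way to sanity-check it: at the job's 8192-byte I/O size the throughput matches the IOPS figure, and Little's law ties the average latency to the queue depth of 128:

  16416.02 IOPS * 8192 B  =  ~134.5 MB/s  =  128.25 MiB/s
  128 (queue depth) / 16416.02 IOPS  =  ~7798 us, right at the reported 7789.29 us average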
[... the NSID-conflict error pair keeps repeating while the test winds down (09:33:38.138282 through 09:33:38.294287); duplicate records elided ...]
00:31:37.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1330604) - No such process
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1330604
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
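The burst of paired errors above is target/zcopy.sh deliberately re-adding NSID 1 while the subsystem is cycled through paused/active states; every attempt is rejected, which is the behavior under test. A minimal sketch of triggering the same rejection by hand against a running target, assuming SPDK's scripts/rpc.py on its default /var/tmp/spdk.sock socket (the 64 MB / 512 B malloc sizing is illustrative, not taken from this log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b malloc0                            # 64 MB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a             # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add of NSID 1 succeeds
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # fails: "Requested NSID 1 already in use"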
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:37.390 delay0
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:37.390 09:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:31:37.390 [2024-11-19 09:33:38.443644] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:31:45.507 Initializing NVMe Controllers
00:31:45.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:45.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:45.507 Initialization complete. Launching workers.
00:31:45.507 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 241, failed: 27970
00:31:45.507 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 28081, failed to submit 130
00:31:45.507 success 27984, unsuccessful 97, failed 0
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:45.507 rmmod nvme_tcp
00:31:45.507 rmmod nvme_fabrics
00:31:45.507 rmmod nvme_keyring
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1328759 ']'
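Two things worth decoding in the records above. First, the bdev_delay_create knobs: per rpc.py's option names (worth confirming against the local SPDK tree), -r and -t are the average and p99 ("nine-nine") read latency, -w and -n the same for writes, all in microseconds, so 1000000 everywhere makes every I/O to delay0 sit for roughly a second and leaves the abort example a deep backlog of inflight commands to cancel. Rewritten under that assumption:

# -r / -t: average and p99 read latency, in microseconds
# -w / -n: average and p99 write latency, in microseconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

Second, the abort counters reconcile exactly: 27984 successful + 97 unsuccessful = 28081 aborts submitted, and 28081 + 130 that failed to submit = 28211, the same as the 241 completed + 27970 failed I/Os.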
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1328759 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1328759 ']' 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1328759 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1328759 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1328759' 00:31:45.507 killing process with pid 1328759 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1328759 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1328759 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.507 09:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.415 09:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.415 00:31:47.415 real 0m32.252s 00:31:47.415 user 0m41.657s 00:31:47.415 sys 0m12.979s 00:31:47.415 09:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:47.415 09:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:47.415 ************************************ 00:31:47.415 END TEST nvmf_zcopy 00:31:47.415 
************************************
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:47.415 ************************************
00:31:47.415 START TEST nvmf_nmic
00:31:47.415 ************************************
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:31:47.415 * Looking for test storage...
00:31:47.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
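The scripts/common.sh records here are cmp_versions answering "is lcov 1.15 older than 2?": both strings are split on '.', '-', and ':' into arrays, then compared field by field, with missing fields treated as 0. A condensed sketch of that logic (not the actual function, which also tracks gt/eq and the requested operator), matched by the records traced below:

ver_lt() {
  # split "1.15" -> (1 15) and "2" -> (2); missing fields compare as 0
  local IFS=.-: i v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # smaller in the first differing field
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1                                        # equal is not "less than"
}
ver_lt 1.15 2 && echo "1.15 < 2"   # here ver1[0]=1 < ver2[0]=2 decides it immediately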
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:31:47.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:47.415 --rc genhtml_branch_coverage=1
00:31:47.415 --rc genhtml_function_coverage=1
00:31:47.415 --rc genhtml_legend=1
00:31:47.415 --rc geninfo_all_blocks=1
00:31:47.415 --rc geninfo_unexecuted_blocks=1
00:31:47.415
00:31:47.415 '
[... the same flag block is traced three more times (common/autotest_common.sh@1704 LCOV_OPTS='...', @1705 export 'LCOV=lcov ...', @1705 LCOV='lcov ...'); near-duplicate records elided ...]
00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.415 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated six more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3 and @4 trace the same PATH with the go and then protoc directories rotated to the front, @5 exports it, and @6 echoes the final value; near-duplicate records elided ...]
00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
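build_nvmf_app_args is assembling the argv for the target application: the records above append the shared-memory id and the -e 0xFFFF flag exactly as traced, and the records just below add --interrupt-mode because the test's SPDK_TEST flags request it. A sketch of the resulting launch, assuming the app binary is build/bin/nvmf_tgt as in a stock SPDK tree (path and final array contents are illustrative, not taken from this log):

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # appended at nvmf/common.sh@29 above
NVMF_APP+=(--interrupt-mode)                  # appended at nvmf/common.sh@34 just below
"${NVMF_APP[@]}" &                            # the target runs in the background for the test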
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.416 09:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:53.991 09:33:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:53.991 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.991 09:33:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:53.991 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:53.991 Found net devices under 0000:86:00.0: cvl_0_0 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.991 
09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:53.991 Found net devices under 0000:86:00.1: cvl_0_1 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.991 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:53.992 09:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:53.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:53.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:31:53.992 00:31:53.992 --- 10.0.0.2 ping statistics --- 00:31:53.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.992 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:53.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:31:53.992 00:31:53.992 --- 10.0.0.1 ping statistics --- 00:31:53.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.992 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1336013 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1336013 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1336013 ']' 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:53.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.992 [2024-11-19 09:33:54.175492] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:53.992 [2024-11-19 09:33:54.176506] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:31:53.992 [2024-11-19 09:33:54.176547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.992 [2024-11-19 09:33:54.257322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.992 [2024-11-19 09:33:54.300345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.992 [2024-11-19 09:33:54.300384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.992 [2024-11-19 09:33:54.300391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.992 [2024-11-19 09:33:54.300397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:53.992 [2024-11-19 09:33:54.300402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:53.992 [2024-11-19 09:33:54.302006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.992 [2024-11-19 09:33:54.302112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.992 [2024-11-19 09:33:54.302196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.992 [2024-11-19 09:33:54.302196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:53.992 [2024-11-19 09:33:54.371233] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:53.992 [2024-11-19 09:33:54.371828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:53.992 [2024-11-19 09:33:54.372243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:53.992 [2024-11-19 09:33:54.372607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:53.992 [2024-11-19 09:33:54.372653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.992 [2024-11-19 09:33:54.451085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.992 Malloc0 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.992 [2024-11-19 09:33:54.535309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.992 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:53.993 test case1: single bdev can't be used in multiple subsystems 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.993 [2024-11-19 09:33:54.562746] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:53.993 [2024-11-19 09:33:54.562771] subsystem.c:2300:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:53.993 [2024-11-19 09:33:54.562780] nvmf_rpc.c:1539:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:53.993 request: 00:31:53.993 { 00:31:53.993 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:53.993 "namespace": { 00:31:53.993 "bdev_name": "Malloc0", 00:31:53.993 "no_auto_visible": false 00:31:53.993 }, 00:31:53.993 "method": "nvmf_subsystem_add_ns", 00:31:53.993 "req_id": 1 00:31:53.993 } 00:31:53.993 Got JSON-RPC error response 00:31:53.993 response: 00:31:53.993 { 00:31:53.993 "code": -32602, 00:31:53.993 "message": "Invalid parameters" 00:31:53.993 } 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:53.993 09:33:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:53.993 Adding namespace failed - expected result. 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:53.993 test case2: host connect to nvmf target in multiple paths 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:53.993 [2024-11-19 09:33:54.574859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:53.993 09:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:54.324 09:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:54.324 09:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:31:54.324 09:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:54.324 09:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:31:54.324 09:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:31:56.308 09:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:56.308 09:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:56.308 09:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:56.308 09:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:31:56.308 09:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:56.308 09:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:31:56.308 09:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:56.308 [global] 00:31:56.308 thread=1 00:31:56.308 invalidate=1 
00:31:56.308 rw=write 00:31:56.308 time_based=1 00:31:56.308 runtime=1 00:31:56.308 ioengine=libaio 00:31:56.308 direct=1 00:31:56.308 bs=4096 00:31:56.308 iodepth=1 00:31:56.308 norandommap=0 00:31:56.308 numjobs=1 00:31:56.308 00:31:56.308 verify_dump=1 00:31:56.308 verify_backlog=512 00:31:56.308 verify_state_save=0 00:31:56.308 do_verify=1 00:31:56.308 verify=crc32c-intel 00:31:56.308 [job0] 00:31:56.308 filename=/dev/nvme0n1 00:31:56.308 Could not set queue depth (nvme0n1) 00:31:56.565 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:56.565 fio-3.35 00:31:56.565 Starting 1 thread 00:31:57.498 00:31:57.498 job0: (groupid=0, jobs=1): err= 0: pid=1336802: Tue Nov 19 09:33:58 2024 00:31:57.498 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:31:57.498 slat (nsec): min=7134, max=39972, avg=8222.64, stdev=1417.13 00:31:57.498 clat (usec): min=183, max=268, avg=207.93, stdev= 9.88 00:31:57.498 lat (usec): min=196, max=276, avg=216.15, stdev= 9.88 00:31:57.498 clat percentiles (usec): 00:31:57.498 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 196], 20.00th=[ 200], 00:31:57.498 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 210], 00:31:57.498 | 70.00th=[ 212], 80.00th=[ 215], 90.00th=[ 219], 95.00th=[ 223], 00:31:57.498 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 265], 99.95th=[ 265], 00:31:57.498 | 99.99th=[ 269] 00:31:57.498 write: IOPS=2603, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets 00:31:57.498 slat (usec): min=10, max=26898, avg=22.17, stdev=526.68 00:31:57.498 clat (usec): min=109, max=349, avg=143.05, stdev=14.08 00:31:57.498 lat (usec): min=141, max=27197, avg=165.22, stdev=529.92 00:31:57.498 clat percentiles (usec): 00:31:57.498 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 137], 00:31:57.498 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:31:57.498 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 184], 00:31:57.498 | 99.00th=[ 190], 99.50th=[ 192], 99.90th=[ 212], 99.95th=[ 297], 00:31:57.498 | 99.99th=[ 351] 00:31:57.498 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:31:57.498 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:57.498 lat (usec) : 250=99.44%, 500=0.56% 00:31:57.498 cpu : usr=5.20%, sys=7.30%, ctx=5168, majf=0, minf=1 00:31:57.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.498 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.498 issued rwts: total=2560,2606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:57.498 00:31:57.498 Run status group 0 (all jobs): 00:31:57.498 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:31:57.498 WRITE: bw=10.2MiB/s (10.7MB/s), 10.2MiB/s-10.2MiB/s (10.7MB/s-10.7MB/s), io=10.2MiB (10.7MB), run=1001-1001msec 00:31:57.498 00:31:57.498 Disk stats (read/write): 00:31:57.498 nvme0n1: ios=2142/2560, merge=0/0, ticks=1402/346, in_queue=1748, util=98.40% 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:57.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:57.757 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:57.757 rmmod nvme_tcp 00:31:57.757 rmmod nvme_fabrics 00:31:57.757 rmmod nvme_keyring 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1336013 ']' 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1336013 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1336013 ']' 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1336013 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1336013 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1336013' 00:31:58.016 killing process with pid 
1336013 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1336013 00:31:58.016 09:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1336013 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.274 09:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.178 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:00.178 00:32:00.178 real 0m13.101s 00:32:00.178 user 0m24.140s 00:32:00.178 sys 0m6.201s 00:32:00.178 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:00.178 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:00.178 ************************************ 00:32:00.178 END TEST nvmf_nmic 00:32:00.178 ************************************ 00:32:00.178 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:00.178 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:00.178 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:00.178 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:00.178 ************************************ 00:32:00.178 START TEST nvmf_fio_target 00:32:00.178 ************************************ 00:32:00.178 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:00.438 * Looking for test storage... 
00:32:00.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:00.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.438 --rc genhtml_branch_coverage=1 00:32:00.438 --rc genhtml_function_coverage=1 00:32:00.438 --rc genhtml_legend=1 00:32:00.438 --rc geninfo_all_blocks=1 00:32:00.438 --rc geninfo_unexecuted_blocks=1 00:32:00.438 00:32:00.438 ' 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:00.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.438 --rc genhtml_branch_coverage=1 00:32:00.438 --rc genhtml_function_coverage=1 00:32:00.438 --rc genhtml_legend=1 00:32:00.438 --rc geninfo_all_blocks=1 00:32:00.438 --rc geninfo_unexecuted_blocks=1 00:32:00.438 00:32:00.438 ' 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:00.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.438 --rc genhtml_branch_coverage=1 00:32:00.438 --rc genhtml_function_coverage=1 00:32:00.438 --rc genhtml_legend=1 00:32:00.438 --rc geninfo_all_blocks=1 00:32:00.438 --rc geninfo_unexecuted_blocks=1 00:32:00.438 00:32:00.438 ' 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:00.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.438 --rc genhtml_branch_coverage=1 00:32:00.438 --rc genhtml_function_coverage=1 00:32:00.438 --rc genhtml_legend=1 00:32:00.438 --rc geninfo_all_blocks=1 00:32:00.438 --rc geninfo_unexecuted_blocks=1 00:32:00.438 
00:32:00.438 ' 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.438 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.439 09:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.007 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.007 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:07.007 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:07.008 09:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:07.008 09:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:07.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:07.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:07.008 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:07.008 Found net devices under 0000:86:00.1: cvl_0_1 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:32:07.008 00:32:07.008 --- 10.0.0.2 ping statistics --- 00:32:07.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.008 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:32:07.008 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:32:07.008 00:32:07.009 --- 10.0.0.1 ping statistics --- 00:32:07.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.009 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1340387 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1340387 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1340387 ']' 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.009 [2024-11-19 09:34:07.354320] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:07.009 [2024-11-19 09:34:07.355242] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:32:07.009 [2024-11-19 09:34:07.355277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.009 [2024-11-19 09:34:07.435192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:07.009 [2024-11-19 09:34:07.477826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:07.009 [2024-11-19 09:34:07.477865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:07.009 [2024-11-19 09:34:07.477872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:07.009 [2024-11-19 09:34:07.477879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:07.009 [2024-11-19 09:34:07.477884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:07.009 [2024-11-19 09:34:07.479465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:07.009 [2024-11-19 09:34:07.479575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:07.009 [2024-11-19 09:34:07.479723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.009 [2024-11-19 09:34:07.479724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.009 [2024-11-19 09:34:07.547796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:07.009 [2024-11-19 09:34:07.548534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:07.009 [2024-11-19 09:34:07.548796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:07.009 [2024-11-19 09:34:07.549166] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:07.009 [2024-11-19 09:34:07.549220] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
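Note: the nvmftestinit phase traced above reduces to the shell sequence below. This is a condensed sketch, not the test script itself; it assumes the same ice-driver netdev names (cvl_0_0, cvl_0_1) seen in this run, and $SPDK_DIR stands in for the long Jenkins workspace checkout path. Every command appears verbatim in the xtrace output above.

    # Move the target port into a private namespace so target and initiator
    # can talk over the two physical e810 ports on the same host.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port toward the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Launch nvmf_tgt inside the namespace: 4 cores (-m 0xF), tracepoint
    # group mask 0xFFFF, interrupt mode instead of polling.
    # $SPDK_DIR is a placeholder for the checkout path used in the trace.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &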
00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:07.009 [2024-11-19 09:34:07.792377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.009 09:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:07.009 09:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:07.009 09:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:07.268 09:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:07.268 09:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:07.527 09:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:07.527 09:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:07.786 09:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:07.786 09:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:08.045 09:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:08.045 09:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:08.045 09:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:08.304 09:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:08.304 09:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:08.563 09:34:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:08.563 09:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:08.822 09:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:08.822 09:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:08.822 09:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:09.080 09:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:09.080 09:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:09.338 09:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.597 [2024-11-19 09:34:10.428292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.597 09:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:09.855 09:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:09.855 09:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:10.114 09:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:10.114 09:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:32:10.114 09:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:32:10.114 09:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:32:10.114 09:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:32:10.114 09:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:32:12.645 09:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:32:12.645 09:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:32:12.645 09:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:32:12.645 09:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:32:12.645 09:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:32:12.645 09:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:32:12.645 09:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:12.645 [global] 00:32:12.645 thread=1 00:32:12.645 invalidate=1 00:32:12.645 rw=write 00:32:12.645 time_based=1 00:32:12.645 runtime=1 00:32:12.645 ioengine=libaio 00:32:12.645 direct=1 00:32:12.645 bs=4096 00:32:12.645 iodepth=1 00:32:12.645 norandommap=0 00:32:12.645 numjobs=1 00:32:12.645 00:32:12.645 verify_dump=1 00:32:12.645 verify_backlog=512 00:32:12.645 verify_state_save=0 00:32:12.645 do_verify=1 00:32:12.645 verify=crc32c-intel 00:32:12.645 [job0] 00:32:12.645 filename=/dev/nvme0n1 00:32:12.645 [job1] 00:32:12.645 filename=/dev/nvme0n2 00:32:12.645 [job2] 00:32:12.645 filename=/dev/nvme0n3 00:32:12.645 [job3] 00:32:12.645 filename=/dev/nvme0n4 00:32:12.645 Could not set queue depth (nvme0n1) 00:32:12.645 Could not set queue depth (nvme0n2) 00:32:12.645 Could not set queue depth (nvme0n3) 00:32:12.645 Could not set queue depth (nvme0n4) 00:32:12.645 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.645 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.645 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.645 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:12.645 fio-3.35 00:32:12.645 Starting 4 threads 00:32:14.020 00:32:14.020 job0: (groupid=0, jobs=1): err= 0: pid=1341679: Tue Nov 19 09:34:14 2024 00:32:14.020 read: IOPS=23, BW=94.8KiB/s (97.0kB/s)(96.0KiB/1013msec) 00:32:14.020 slat (nsec): min=7575, max=26431, avg=21519.12, stdev=4831.61 00:32:14.020 clat (usec): min=382, max=41965, avg=37670.35, stdev=11434.15 00:32:14.020 lat (usec): min=392, max=41989, avg=37691.87, stdev=11435.17 00:32:14.020 clat percentiles (usec): 00:32:14.020 | 1.00th=[ 383], 5.00th=[ 725], 10.00th=[40633], 20.00th=[41157], 00:32:14.020 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:14.020 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:14.020 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:14.020 | 99.99th=[42206] 00:32:14.020 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:32:14.020 slat (nsec): min=9721, max=52507, avg=12958.54, stdev=4475.49 00:32:14.020 clat (usec): min=135, max=418, avg=193.15, stdev=24.93 00:32:14.020 lat (usec): min=145, max=428, avg=206.11, stdev=25.09 00:32:14.020 clat percentiles (usec): 00:32:14.020 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 176], 00:32:14.020 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:32:14.020 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 227], 00:32:14.020 | 
99.00th=[ 277], 99.50th=[ 338], 99.90th=[ 420], 99.95th=[ 420], 00:32:14.020 | 99.99th=[ 420] 00:32:14.020 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:32:14.020 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:14.020 lat (usec) : 250=94.03%, 500=1.68%, 750=0.19% 00:32:14.020 lat (msec) : 50=4.10% 00:32:14.020 cpu : usr=0.30%, sys=0.59%, ctx=538, majf=0, minf=1 00:32:14.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.020 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:14.020 job1: (groupid=0, jobs=1): err= 0: pid=1341680: Tue Nov 19 09:34:14 2024 00:32:14.020 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:32:14.020 slat (nsec): min=9664, max=24121, avg=22573.14, stdev=2907.85 00:32:14.020 clat (usec): min=40747, max=41075, avg=40958.02, stdev=64.01 00:32:14.020 lat (usec): min=40757, max=41098, avg=40980.60, stdev=66.05 00:32:14.020 clat percentiles (usec): 00:32:14.020 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:14.020 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:14.020 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:14.020 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:14.020 | 99.99th=[41157] 00:32:14.020 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:32:14.020 slat (nsec): min=9851, max=39673, avg=11806.59, stdev=2933.72 00:32:14.020 clat (usec): min=147, max=387, avg=185.06, stdev=20.47 00:32:14.020 lat (usec): min=161, max=427, avg=196.87, stdev=20.84 00:32:14.020 clat percentiles (usec): 00:32:14.020 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 174], 00:32:14.020 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:32:14.020 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 200], 95.00th=[ 204], 00:32:14.020 | 99.00th=[ 258], 99.50th=[ 330], 99.90th=[ 388], 99.95th=[ 388], 00:32:14.020 | 99.99th=[ 388] 00:32:14.020 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:32:14.020 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:14.020 lat (usec) : 250=94.76%, 500=1.12% 00:32:14.020 lat (msec) : 50=4.12% 00:32:14.020 cpu : usr=0.30%, sys=0.50%, ctx=535, majf=0, minf=1 00:32:14.020 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.020 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.020 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:14.020 job2: (groupid=0, jobs=1): err= 0: pid=1341681: Tue Nov 19 09:34:14 2024 00:32:14.020 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:32:14.020 slat (nsec): min=10626, max=24383, avg=22377.36, stdev=2658.37 00:32:14.020 clat (usec): min=40526, max=41944, avg=40995.03, stdev=235.08 00:32:14.020 lat (usec): min=40537, max=41967, avg=41017.40, stdev=236.21 00:32:14.020 clat percentiles (usec): 00:32:14.020 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:14.020 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:14.020 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:14.020 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:14.020 | 99.99th=[42206] 00:32:14.020 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:32:14.020 slat (nsec): min=9274, max=49618, avg=12254.90, stdev=5084.61 00:32:14.020 clat (usec): min=140, max=255, avg=183.01, stdev=15.50 00:32:14.020 lat (usec): min=151, max=265, avg=195.26, stdev=15.34 00:32:14.020 clat percentiles (usec): 00:32:14.020 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 172], 00:32:14.020 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:32:14.020 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 208], 00:32:14.020 | 99.00th=[ 229], 99.50th=[ 239], 99.90th=[ 255], 99.95th=[ 255], 00:32:14.020 | 99.99th=[ 255] 00:32:14.020 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:32:14.020 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:14.021 lat (usec) : 250=95.51%, 500=0.37% 00:32:14.021 lat (msec) : 50=4.12% 00:32:14.021 cpu : usr=0.00%, sys=0.80%, ctx=535, majf=0, minf=2 00:32:14.021 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.021 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.021 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:14.021 job3: (groupid=0, jobs=1): err= 0: pid=1341682: Tue Nov 19 09:34:14 2024 00:32:14.021 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:32:14.021 slat (nsec): min=9987, max=24684, avg=22580.82, stdev=2849.41 00:32:14.021 clat (usec): min=40865, max=41069, avg=40962.38, stdev=45.41 00:32:14.021 lat (usec): min=40882, max=41094, avg=40984.96, stdev=46.88 00:32:14.021 clat percentiles (usec): 00:32:14.021 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:14.021 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:14.021 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:14.021 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:14.021 | 99.99th=[41157] 00:32:14.021 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:32:14.021 slat (nsec): min=9517, max=68528, avg=10552.35, stdev=2698.82 00:32:14.021 clat (usec): min=134, max=389, avg=211.76, stdev=41.08 00:32:14.021 lat (usec): min=145, max=399, avg=222.32, stdev=41.39 00:32:14.021 clat percentiles (usec): 00:32:14.021 | 1.00th=[ 141], 5.00th=[ 153], 10.00th=[ 172], 20.00th=[ 188], 00:32:14.021 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:32:14.021 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 281], 95.00th=[ 293], 00:32:14.021 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 392], 99.95th=[ 392], 00:32:14.021 | 99.99th=[ 392] 00:32:14.021 bw ( KiB/s): min= 4096, max= 4096, per=50.85%, avg=4096.00, stdev= 0.00, samples=1 00:32:14.021 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:14.021 lat (usec) : 250=83.71%, 500=12.17% 00:32:14.021 lat (msec) : 50=4.12% 00:32:14.021 cpu : usr=0.30%, sys=0.49%, ctx=534, majf=0, minf=1 00:32:14.021 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.021 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.021 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.021 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:14.021 00:32:14.021 Run status group 0 (all jobs): 00:32:14.021 READ: bw=354KiB/s (362kB/s), 86.5KiB/s-94.8KiB/s (88.6kB/s-97.0kB/s), io=360KiB (369kB), run=1004-1017msec 00:32:14.021 WRITE: bw=8055KiB/s (8248kB/s), 2014KiB/s-2040KiB/s (2062kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1017msec 00:32:14.021 00:32:14.021 Disk stats (read/write): 00:32:14.021 nvme0n1: ios=72/512, merge=0/0, ticks=1647/94, in_queue=1741, util=98.10% 00:32:14.021 nvme0n2: ios=43/512, merge=0/0, ticks=1726/91, in_queue=1817, util=98.48% 00:32:14.021 nvme0n3: ios=18/512, merge=0/0, ticks=739/93, in_queue=832, util=89.07% 00:32:14.021 nvme0n4: ios=73/512, merge=0/0, ticks=811/106, in_queue=917, util=90.98% 00:32:14.021 09:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:14.021 [global] 00:32:14.021 thread=1 00:32:14.021 invalidate=1 00:32:14.021 rw=randwrite 00:32:14.021 time_based=1 00:32:14.021 runtime=1 00:32:14.021 ioengine=libaio 00:32:14.021 direct=1 00:32:14.021 bs=4096 00:32:14.021 iodepth=1 00:32:14.021 norandommap=0 00:32:14.021 numjobs=1 00:32:14.021 00:32:14.021 verify_dump=1 00:32:14.021 verify_backlog=512 00:32:14.021 verify_state_save=0 00:32:14.021 do_verify=1 00:32:14.021 verify=crc32c-intel 00:32:14.021 [job0] 00:32:14.021 filename=/dev/nvme0n1 00:32:14.021 [job1] 00:32:14.021 filename=/dev/nvme0n2 00:32:14.021 [job2] 00:32:14.021 filename=/dev/nvme0n3 00:32:14.021 [job3] 00:32:14.021 filename=/dev/nvme0n4 00:32:14.021 Could not set queue depth (nvme0n1) 00:32:14.021 Could not set queue depth (nvme0n2) 00:32:14.021 Could not set queue depth (nvme0n3) 00:32:14.021 Could not set queue depth (nvme0n4) 00:32:14.280 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.280 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.280 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.280 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:14.280 fio-3.35 00:32:14.280 Starting 4 threads 00:32:15.656 00:32:15.656 job0: (groupid=0, jobs=1): err= 0: pid=1342046: Tue Nov 19 09:34:16 2024 00:32:15.656 read: IOPS=1970, BW=7880KiB/s (8069kB/s)(7888KiB/1001msec) 00:32:15.656 slat (nsec): min=7424, max=37418, avg=9149.39, stdev=1553.77 00:32:15.656 clat (usec): min=170, max=583, avg=280.55, stdev=76.52 00:32:15.656 lat (usec): min=178, max=591, avg=289.70, stdev=76.39 00:32:15.656 clat percentiles (usec): 00:32:15.656 | 1.00th=[ 190], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:32:15.656 | 30.00th=[ 229], 40.00th=[ 245], 50.00th=[ 260], 60.00th=[ 277], 00:32:15.656 | 70.00th=[ 297], 80.00th=[ 322], 90.00th=[ 383], 95.00th=[ 482], 00:32:15.656 | 99.00th=[ 515], 99.50th=[ 519], 99.90th=[ 537], 99.95th=[ 586], 00:32:15.656 | 99.99th=[ 586] 00:32:15.656 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:15.656 slat (nsec): min=10594, max=44129, avg=12653.24, stdev=2122.06 00:32:15.656 clat 
(usec): min=132, max=3201, avg=190.52, stdev=72.62 00:32:15.656 lat (usec): min=142, max=3216, avg=203.18, stdev=72.78 00:32:15.656 clat percentiles (usec): 00:32:15.656 | 1.00th=[ 141], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:32:15.656 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:32:15.656 | 70.00th=[ 198], 80.00th=[ 215], 90.00th=[ 239], 95.00th=[ 241], 00:32:15.656 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 330], 00:32:15.656 | 99.99th=[ 3195] 00:32:15.656 bw ( KiB/s): min= 8192, max= 8192, per=30.80%, avg=8192.00, stdev= 0.00, samples=1 00:32:15.656 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:15.656 lat (usec) : 250=71.14%, 500=27.44%, 750=1.39% 00:32:15.656 lat (msec) : 4=0.02% 00:32:15.656 cpu : usr=2.70%, sys=7.60%, ctx=4024, majf=0, minf=1 00:32:15.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.656 issued rwts: total=1972,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.656 job1: (groupid=0, jobs=1): err= 0: pid=1342047: Tue Nov 19 09:34:16 2024 00:32:15.656 read: IOPS=1994, BW=7976KiB/s (8167kB/s)(7984KiB/1001msec) 00:32:15.656 slat (nsec): min=6720, max=27518, avg=7793.14, stdev=1190.13 00:32:15.656 clat (usec): min=173, max=588, avg=276.32, stdev=65.82 00:32:15.656 lat (usec): min=181, max=596, avg=284.11, stdev=65.80 00:32:15.656 clat percentiles (usec): 00:32:15.656 | 1.00th=[ 194], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 239], 00:32:15.656 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 262], 00:32:15.656 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 343], 95.00th=[ 461], 00:32:15.656 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 553], 99.95th=[ 586], 00:32:15.656 | 99.99th=[ 586] 00:32:15.656 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:15.656 slat (nsec): min=9312, max=38967, avg=10487.93, stdev=1327.16 00:32:15.656 clat (usec): min=133, max=409, avg=196.40, stdev=39.14 00:32:15.656 lat (usec): min=143, max=448, avg=206.89, stdev=39.24 00:32:15.656 clat percentiles (usec): 00:32:15.656 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:32:15.656 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 192], 00:32:15.656 | 70.00th=[ 206], 80.00th=[ 225], 90.00th=[ 260], 95.00th=[ 277], 00:32:15.656 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 367], 99.95th=[ 396], 00:32:15.656 | 99.99th=[ 408] 00:32:15.656 bw ( KiB/s): min= 8192, max= 8192, per=30.80%, avg=8192.00, stdev= 0.00, samples=1 00:32:15.656 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:15.656 lat (usec) : 250=65.60%, 500=33.31%, 750=1.09% 00:32:15.656 cpu : usr=1.60%, sys=4.30%, ctx=4045, majf=0, minf=1 00:32:15.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.656 issued rwts: total=1996,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.656 job2: (groupid=0, jobs=1): err= 0: pid=1342048: Tue Nov 19 09:34:16 2024 00:32:15.656 read: IOPS=1327, BW=5311KiB/s (5438kB/s)(5316KiB/1001msec) 00:32:15.656 slat 
(nsec): min=6478, max=27289, avg=7631.30, stdev=1627.97 00:32:15.656 clat (usec): min=194, max=41276, avg=525.14, stdev=3248.96 00:32:15.656 lat (usec): min=201, max=41283, avg=532.77, stdev=3249.17 00:32:15.656 clat percentiles (usec): 00:32:15.656 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:32:15.656 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 273], 00:32:15.656 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 330], 00:32:15.656 | 99.00th=[ 375], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:15.656 | 99.99th=[41157] 00:32:15.656 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:32:15.656 slat (nsec): min=9403, max=37945, avg=10378.72, stdev=1312.59 00:32:15.656 clat (usec): min=133, max=309, avg=175.57, stdev=20.31 00:32:15.656 lat (usec): min=143, max=319, avg=185.95, stdev=20.46 00:32:15.656 clat percentiles (usec): 00:32:15.656 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:32:15.656 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:32:15.656 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 239], 00:32:15.656 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 277], 99.95th=[ 310], 00:32:15.656 | 99.99th=[ 310] 00:32:15.656 bw ( KiB/s): min= 8192, max= 8192, per=30.80%, avg=8192.00, stdev= 0.00, samples=1 00:32:15.656 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:15.656 lat (usec) : 250=75.50%, 500=24.19% 00:32:15.656 lat (msec) : 50=0.31% 00:32:15.656 cpu : usr=1.30%, sys=2.80%, ctx=2866, majf=0, minf=1 00:32:15.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.656 issued rwts: total=1329,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.656 job3: (groupid=0, jobs=1): err= 0: pid=1342049: Tue Nov 19 09:34:16 2024 00:32:15.656 read: IOPS=635, BW=2541KiB/s (2602kB/s)(2544KiB/1001msec) 00:32:15.656 slat (nsec): min=7081, max=28830, avg=8293.50, stdev=2470.16 00:32:15.656 clat (usec): min=229, max=41543, avg=1247.89, stdev=6190.51 00:32:15.656 lat (usec): min=237, max=41551, avg=1256.18, stdev=6190.76 00:32:15.656 clat percentiles (usec): 00:32:15.656 | 1.00th=[ 241], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 00:32:15.656 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 293], 60.00th=[ 297], 00:32:15.656 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 363], 95.00th=[ 371], 00:32:15.656 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:32:15.656 | 99.99th=[41681] 00:32:15.656 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:32:15.656 slat (nsec): min=9592, max=39350, avg=10703.45, stdev=1208.28 00:32:15.656 clat (usec): min=155, max=372, avg=182.27, stdev=15.05 00:32:15.656 lat (usec): min=165, max=411, avg=192.98, stdev=15.47 00:32:15.656 clat percentiles (usec): 00:32:15.656 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:32:15.656 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:32:15.656 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 208], 00:32:15.656 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 297], 99.95th=[ 371], 00:32:15.656 | 99.99th=[ 371] 00:32:15.656 bw ( KiB/s): min= 4096, max= 4096, per=15.40%, avg=4096.00, stdev= 0.00, samples=1 00:32:15.656 iops : 
min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:15.656 lat (usec) : 250=65.84%, 500=33.19%, 750=0.06% 00:32:15.656 lat (msec) : 50=0.90% 00:32:15.656 cpu : usr=1.10%, sys=1.40%, ctx=1661, majf=0, minf=1 00:32:15.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:15.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:15.656 issued rwts: total=636,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:15.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:15.656 00:32:15.656 Run status group 0 (all jobs): 00:32:15.656 READ: bw=23.2MiB/s (24.3MB/s), 2541KiB/s-7976KiB/s (2602kB/s-8167kB/s), io=23.2MiB (24.3MB), run=1001-1001msec 00:32:15.656 WRITE: bw=26.0MiB/s (27.2MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:32:15.656 00:32:15.656 Disk stats (read/write): 00:32:15.656 nvme0n1: ios=1570/1851, merge=0/0, ticks=1316/329, in_queue=1645, util=98.90% 00:32:15.656 nvme0n2: ios=1586/1920, merge=0/0, ticks=1270/368, in_queue=1638, util=98.58% 00:32:15.656 nvme0n3: ios=1119/1536, merge=0/0, ticks=1332/269, in_queue=1601, util=97.61% 00:32:15.656 nvme0n4: ios=654/1024, merge=0/0, ticks=1607/189, in_queue=1796, util=98.53% 00:32:15.656 09:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:15.656 [global] 00:32:15.656 thread=1 00:32:15.657 invalidate=1 00:32:15.657 rw=write 00:32:15.657 time_based=1 00:32:15.657 runtime=1 00:32:15.657 ioengine=libaio 00:32:15.657 direct=1 00:32:15.657 bs=4096 00:32:15.657 iodepth=128 00:32:15.657 norandommap=0 00:32:15.657 numjobs=1 00:32:15.657 00:32:15.657 verify_dump=1 00:32:15.657 verify_backlog=512 00:32:15.657 verify_state_save=0 00:32:15.657 do_verify=1 00:32:15.657 verify=crc32c-intel 00:32:15.657 [job0] 00:32:15.657 filename=/dev/nvme0n1 00:32:15.657 [job1] 00:32:15.657 filename=/dev/nvme0n2 00:32:15.657 [job2] 00:32:15.657 filename=/dev/nvme0n3 00:32:15.657 [job3] 00:32:15.657 filename=/dev/nvme0n4 00:32:15.657 Could not set queue depth (nvme0n1) 00:32:15.657 Could not set queue depth (nvme0n2) 00:32:15.657 Could not set queue depth (nvme0n3) 00:32:15.657 Could not set queue depth (nvme0n4) 00:32:15.657 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:15.657 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:15.657 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:15.657 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:15.657 fio-3.35 00:32:15.657 Starting 4 threads 00:32:17.034 00:32:17.034 job0: (groupid=0, jobs=1): err= 0: pid=1342422: Tue Nov 19 09:34:17 2024 00:32:17.034 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:32:17.034 slat (nsec): min=1676, max=20577k, avg=147907.74, stdev=1077778.39 00:32:17.034 clat (usec): min=3526, max=93068, avg=16410.48, stdev=10266.01 00:32:17.034 lat (usec): min=3537, max=93076, avg=16558.39, stdev=10409.02 00:32:17.034 clat percentiles (usec): 00:32:17.034 | 1.00th=[ 5669], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[10552], 00:32:17.034 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12780], 
60.00th=[13829], 00:32:17.034 | 70.00th=[17695], 80.00th=[20841], 90.00th=[24773], 95.00th=[36963], 00:32:17.034 | 99.00th=[65274], 99.50th=[84411], 99.90th=[92799], 99.95th=[92799], 00:32:17.034 | 99.99th=[92799] 00:32:17.034 write: IOPS=2688, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1014msec); 0 zone resets 00:32:17.034 slat (usec): min=2, max=12873, avg=221.20, stdev=1134.84 00:32:17.034 clat (msec): min=2, max=116, avg=31.76, stdev=27.39 00:32:17.034 lat (msec): min=2, max=116, avg=31.98, stdev=27.56 00:32:17.034 clat percentiles (msec): 00:32:17.034 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 13], 00:32:17.034 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 21], 60.00th=[ 23], 00:32:17.034 | 70.00th=[ 35], 80.00th=[ 50], 90.00th=[ 81], 95.00th=[ 100], 00:32:17.034 | 99.00th=[ 112], 99.50th=[ 114], 99.90th=[ 116], 99.95th=[ 116], 00:32:17.034 | 99.99th=[ 116] 00:32:17.034 bw ( KiB/s): min= 7112, max=13680, per=16.75%, avg=10396.00, stdev=4644.28, samples=2 00:32:17.034 iops : min= 1778, max= 3420, avg=2599.00, stdev=1161.07, samples=2 00:32:17.034 lat (msec) : 4=0.45%, 10=11.73%, 20=51.17%, 50=25.71%, 100=9.06% 00:32:17.034 lat (msec) : 250=1.87% 00:32:17.034 cpu : usr=1.97%, sys=4.44%, ctx=266, majf=0, minf=1 00:32:17.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:17.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.034 issued rwts: total=2560,2726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.034 job1: (groupid=0, jobs=1): err= 0: pid=1342423: Tue Nov 19 09:34:17 2024 00:32:17.034 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:32:17.034 slat (nsec): min=1738, max=17218k, avg=111498.03, stdev=893262.51 00:32:17.034 clat (usec): min=5846, max=48370, avg=13046.74, stdev=6478.36 00:32:17.034 lat (usec): min=5853, max=48375, avg=13158.24, stdev=6573.88 00:32:17.034 clat percentiles (usec): 00:32:17.034 | 1.00th=[ 7570], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:32:17.034 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11076], 60.00th=[11469], 00:32:17.034 | 70.00th=[11731], 80.00th=[13698], 90.00th=[20317], 95.00th=[27657], 00:32:17.034 | 99.00th=[39584], 99.50th=[43254], 99.90th=[48497], 99.95th=[48497], 00:32:17.034 | 99.99th=[48497] 00:32:17.034 write: IOPS=4297, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1014msec); 0 zone resets 00:32:17.034 slat (usec): min=2, max=22638, avg=118.46, stdev=858.01 00:32:17.034 clat (usec): min=1258, max=74724, avg=16847.39, stdev=13295.43 00:32:17.034 lat (usec): min=1302, max=74731, avg=16965.85, stdev=13375.36 00:32:17.034 clat percentiles (usec): 00:32:17.034 | 1.00th=[ 4883], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 8455], 00:32:17.034 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10945], 60.00th=[14353], 00:32:17.034 | 70.00th=[16581], 80.00th=[20841], 90.00th=[38536], 95.00th=[49021], 00:32:17.034 | 99.00th=[63701], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:32:17.034 | 99.99th=[74974] 00:32:17.034 bw ( KiB/s): min=16592, max=17256, per=27.27%, avg=16924.00, stdev=469.52, samples=2 00:32:17.034 iops : min= 4148, max= 4314, avg=4231.00, stdev=117.38, samples=2 00:32:17.034 lat (msec) : 2=0.02%, 4=0.07%, 10=35.23%, 20=48.71%, 50=13.53% 00:32:17.034 lat (msec) : 100=2.44% 00:32:17.034 cpu : usr=3.16%, sys=5.73%, ctx=264, majf=0, minf=1 00:32:17.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.4%, >=64=99.3% 00:32:17.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.034 issued rwts: total=4096,4358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.035 job2: (groupid=0, jobs=1): err= 0: pid=1342425: Tue Nov 19 09:34:17 2024 00:32:17.035 read: IOPS=2647, BW=10.3MiB/s (10.8MB/s)(10.8MiB/1043msec) 00:32:17.035 slat (nsec): min=1059, max=29090k, avg=204357.96, stdev=1563367.97 00:32:17.035 clat (usec): min=7230, max=77410, avg=25965.18, stdev=15927.63 00:32:17.035 lat (usec): min=7237, max=77416, avg=26169.54, stdev=16009.22 00:32:17.035 clat percentiles (usec): 00:32:17.035 | 1.00th=[ 8356], 5.00th=[10552], 10.00th=[11207], 20.00th=[11863], 00:32:17.035 | 30.00th=[15270], 40.00th=[16909], 50.00th=[20841], 60.00th=[25297], 00:32:17.035 | 70.00th=[28967], 80.00th=[38011], 90.00th=[50594], 95.00th=[56886], 00:32:17.035 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:32:17.035 | 99.99th=[77071] 00:32:17.035 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:32:17.035 slat (usec): min=2, max=22553, avg=134.89, stdev=1043.82 00:32:17.035 clat (usec): min=6391, max=73159, avg=19711.29, stdev=12082.56 00:32:17.035 lat (usec): min=6400, max=73167, avg=19846.18, stdev=12158.80 00:32:17.035 clat percentiles (usec): 00:32:17.035 | 1.00th=[ 7767], 5.00th=[10028], 10.00th=[11076], 20.00th=[11469], 00:32:17.035 | 30.00th=[11731], 40.00th=[11994], 50.00th=[14877], 60.00th=[18482], 00:32:17.035 | 70.00th=[20317], 80.00th=[26346], 90.00th=[37487], 95.00th=[49546], 00:32:17.035 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62129], 99.95th=[67634], 00:32:17.035 | 99.99th=[72877] 00:32:17.035 bw ( KiB/s): min=11000, max=13576, per=19.80%, avg=12288.00, stdev=1821.51, samples=2 00:32:17.035 iops : min= 2750, max= 3394, avg=3072.00, stdev=455.38, samples=2 00:32:17.035 lat (msec) : 10=3.27%, 20=56.78%, 50=32.35%, 100=7.59% 00:32:17.035 cpu : usr=1.63%, sys=3.26%, ctx=270, majf=0, minf=1 00:32:17.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:32:17.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.035 issued rwts: total=2761,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.035 job3: (groupid=0, jobs=1): err= 0: pid=1342426: Tue Nov 19 09:34:17 2024 00:32:17.035 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:32:17.035 slat (nsec): min=1119, max=11763k, avg=88093.30, stdev=712611.49 00:32:17.035 clat (usec): min=4152, max=30176, avg=11735.31, stdev=3757.79 00:32:17.035 lat (usec): min=4160, max=30181, avg=11823.41, stdev=3804.47 00:32:17.035 clat percentiles (usec): 00:32:17.035 | 1.00th=[ 5735], 5.00th=[ 7242], 10.00th=[ 8029], 20.00th=[ 8717], 00:32:17.035 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[11994], 00:32:17.035 | 70.00th=[13173], 80.00th=[14484], 90.00th=[16909], 95.00th=[18482], 00:32:17.035 | 99.00th=[23987], 99.50th=[26608], 99.90th=[28967], 99.95th=[30278], 00:32:17.035 | 99.99th=[30278] 00:32:17.035 write: IOPS=5969, BW=23.3MiB/s (24.5MB/s)(23.6MiB/1010msec); 0 zone resets 00:32:17.035 slat (nsec): min=1901, max=10847k, avg=77847.19, stdev=538716.12 00:32:17.035 clat (usec): min=1208, max=30156, 
avg=10282.69, stdev=3443.75 00:32:17.035 lat (usec): min=1219, max=30160, avg=10360.54, stdev=3477.21 00:32:17.035 clat percentiles (usec): 00:32:17.035 | 1.00th=[ 4621], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7373], 00:32:17.035 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10552], 00:32:17.035 | 70.00th=[11863], 80.00th=[12387], 90.00th=[15008], 95.00th=[18482], 00:32:17.035 | 99.00th=[19006], 99.50th=[20317], 99.90th=[21890], 99.95th=[21890], 00:32:17.035 | 99.99th=[30278] 00:32:17.035 bw ( KiB/s): min=23488, max=23728, per=38.03%, avg=23608.00, stdev=169.71, samples=2 00:32:17.035 iops : min= 5872, max= 5932, avg=5902.00, stdev=42.43, samples=2 00:32:17.035 lat (msec) : 2=0.02%, 4=0.19%, 10=49.26%, 20=48.61%, 50=1.93% 00:32:17.035 cpu : usr=3.87%, sys=6.44%, ctx=458, majf=0, minf=1 00:32:17.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:17.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.035 issued rwts: total=5632,6029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.035 00:32:17.035 Run status group 0 (all jobs): 00:32:17.035 READ: bw=56.4MiB/s (59.1MB/s), 9.86MiB/s-21.8MiB/s (10.3MB/s-22.8MB/s), io=58.8MiB (61.6MB), run=1010-1043msec 00:32:17.035 WRITE: bw=60.6MiB/s (63.6MB/s), 10.5MiB/s-23.3MiB/s (11.0MB/s-24.5MB/s), io=63.2MiB (66.3MB), run=1010-1043msec 00:32:17.035 00:32:17.035 Disk stats (read/write): 00:32:17.035 nvme0n1: ios=2083/2359, merge=0/0, ticks=31935/62025, in_queue=93960, util=99.00% 00:32:17.035 nvme0n2: ios=3097/3503, merge=0/0, ticks=40624/55210, in_queue=95834, util=97.33% 00:32:17.035 nvme0n3: ios=2070/2560, merge=0/0, ticks=25482/25119, in_queue=50601, util=97.28% 00:32:17.035 nvme0n4: ios=4435/4608, merge=0/0, ticks=51764/46058, in_queue=97822, util=97.01% 00:32:17.035 09:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:17.035 [global] 00:32:17.035 thread=1 00:32:17.035 invalidate=1 00:32:17.035 rw=randwrite 00:32:17.035 time_based=1 00:32:17.035 runtime=1 00:32:17.035 ioengine=libaio 00:32:17.035 direct=1 00:32:17.035 bs=4096 00:32:17.035 iodepth=128 00:32:17.035 norandommap=0 00:32:17.035 numjobs=1 00:32:17.035 00:32:17.035 verify_dump=1 00:32:17.035 verify_backlog=512 00:32:17.035 verify_state_save=0 00:32:17.035 do_verify=1 00:32:17.035 verify=crc32c-intel 00:32:17.035 [job0] 00:32:17.035 filename=/dev/nvme0n1 00:32:17.035 [job1] 00:32:17.035 filename=/dev/nvme0n2 00:32:17.035 [job2] 00:32:17.035 filename=/dev/nvme0n3 00:32:17.035 [job3] 00:32:17.035 filename=/dev/nvme0n4 00:32:17.035 Could not set queue depth (nvme0n1) 00:32:17.035 Could not set queue depth (nvme0n2) 00:32:17.035 Could not set queue depth (nvme0n3) 00:32:17.035 Could not set queue depth (nvme0n4) 00:32:17.293 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.293 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.293 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.293 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:17.293 
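Note: the job file that fio-wrapper prints just below maps one-to-one onto plain fio flags. As a sketch, assuming only a stock fio binary on the initiator side, the first of the four verifying jobs could equally be launched as:

    # job0 from the printed job file, expressed as command-line flags
    # instead of a job file; filename and all values are taken from the log.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=4096 --iodepth=128 --numjobs=1 --thread \
        --ioengine=libaio --direct=1 --invalidate=1 \
        --time_based --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0

(norandommap=0 is fio's default and is omitted from the flag form.)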
fio-3.35 00:32:17.293 Starting 4 threads 00:32:18.670 00:32:18.670 job0: (groupid=0, jobs=1): err= 0: pid=1342794: Tue Nov 19 09:34:19 2024 00:32:18.670 read: IOPS=3550, BW=13.9MiB/s (14.5MB/s)(14.5MiB/1045msec) 00:32:18.670 slat (nsec): min=1143, max=38648k, avg=139677.88, stdev=1240304.93 00:32:18.670 clat (msec): min=3, max=124, avg=18.08, stdev=14.84 00:32:18.670 lat (msec): min=3, max=124, avg=18.22, stdev=14.99 00:32:18.670 clat percentiles (msec): 00:32:18.670 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 10], 00:32:18.670 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 14], 00:32:18.670 | 70.00th=[ 17], 80.00th=[ 25], 90.00th=[ 35], 95.00th=[ 42], 00:32:18.670 | 99.00th=[ 86], 99.50th=[ 103], 99.90th=[ 125], 99.95th=[ 125], 00:32:18.670 | 99.99th=[ 125] 00:32:18.670 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:32:18.670 slat (nsec): min=1830, max=21303k, avg=110788.55, stdev=866042.67 00:32:18.670 clat (usec): min=550, max=124585, avg=15950.39, stdev=17572.24 00:32:18.670 lat (usec): min=1198, max=124599, avg=16061.18, stdev=17676.42 00:32:18.670 clat percentiles (msec): 00:32:18.670 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:32:18.670 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:32:18.670 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 22], 95.00th=[ 39], 00:32:18.670 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 111], 99.95th=[ 111], 00:32:18.670 | 99.99th=[ 125] 00:32:18.670 bw ( KiB/s): min=16368, max=16384, per=24.35%, avg=16376.00, stdev=11.31, samples=2 00:32:18.670 iops : min= 4092, max= 4096, avg=4094.00, stdev= 2.83, samples=2 00:32:18.670 lat (usec) : 750=0.01% 00:32:18.670 lat (msec) : 2=0.03%, 4=1.09%, 10=28.44%, 20=48.30%, 50=18.23% 00:32:18.670 lat (msec) : 100=2.43%, 250=1.47% 00:32:18.670 cpu : usr=3.64%, sys=3.74%, ctx=226, majf=0, minf=2 00:32:18.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:18.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:18.670 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:18.670 job1: (groupid=0, jobs=1): err= 0: pid=1342795: Tue Nov 19 09:34:19 2024 00:32:18.670 read: IOPS=6028, BW=23.5MiB/s (24.7MB/s)(23.6MiB/1002msec) 00:32:18.670 slat (nsec): min=1028, max=5272.5k, avg=77028.81, stdev=423376.85 00:32:18.670 clat (usec): min=582, max=19521, avg=9771.28, stdev=1885.51 00:32:18.670 lat (usec): min=2599, max=19539, avg=9848.31, stdev=1910.34 00:32:18.670 clat percentiles (usec): 00:32:18.670 | 1.00th=[ 6194], 5.00th=[ 7439], 10.00th=[ 8029], 20.00th=[ 8225], 00:32:18.670 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10159], 00:32:18.670 | 70.00th=[10421], 80.00th=[10945], 90.00th=[12125], 95.00th=[12911], 00:32:18.670 | 99.00th=[15533], 99.50th=[16909], 99.90th=[17433], 99.95th=[17433], 00:32:18.670 | 99.99th=[19530] 00:32:18.670 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:32:18.670 slat (nsec): min=1732, max=24641k, avg=82655.54, stdev=554523.15 00:32:18.670 clat (usec): min=5575, max=57825, avg=11046.27, stdev=6611.88 00:32:18.670 lat (usec): min=5590, max=57835, avg=11128.93, stdev=6657.19 00:32:18.670 clat percentiles (usec): 00:32:18.670 | 1.00th=[ 6849], 5.00th=[ 7963], 10.00th=[ 8160], 20.00th=[ 8291], 00:32:18.670 | 30.00th=[ 8455], 40.00th=[ 9765], 
50.00th=[10028], 60.00th=[10290], 00:32:18.670 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11994], 95.00th=[17957], 00:32:18.670 | 99.00th=[52691], 99.50th=[55313], 99.90th=[57934], 99.95th=[57934], 00:32:18.670 | 99.99th=[57934] 00:32:18.670 bw ( KiB/s): min=24576, max=24576, per=36.55%, avg=24576.00, stdev= 0.00, samples=2 00:32:18.670 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:32:18.670 lat (usec) : 750=0.01% 00:32:18.670 lat (msec) : 4=0.30%, 10=52.75%, 20=44.78%, 50=1.48%, 100=0.68% 00:32:18.670 cpu : usr=2.80%, sys=6.29%, ctx=655, majf=0, minf=1 00:32:18.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:18.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:18.670 issued rwts: total=6041,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:18.670 job2: (groupid=0, jobs=1): err= 0: pid=1342796: Tue Nov 19 09:34:19 2024 00:32:18.670 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:32:18.670 slat (nsec): min=1177, max=26582k, avg=129801.97, stdev=1170380.29 00:32:18.670 clat (usec): min=4082, max=50801, avg=18016.64, stdev=7295.64 00:32:18.670 lat (usec): min=4108, max=50844, avg=18146.45, stdev=7384.40 00:32:18.670 clat percentiles (usec): 00:32:18.670 | 1.00th=[ 5932], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[11469], 00:32:18.670 | 30.00th=[12256], 40.00th=[13042], 50.00th=[15401], 60.00th=[19006], 00:32:18.670 | 70.00th=[22938], 80.00th=[24773], 90.00th=[27919], 95.00th=[30540], 00:32:18.670 | 99.00th=[35914], 99.50th=[35914], 99.90th=[39584], 99.95th=[43779], 00:32:18.670 | 99.99th=[50594] 00:32:18.670 write: IOPS=3217, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1009msec); 0 zone resets 00:32:18.670 slat (usec): min=2, max=26256, avg=158.26, stdev=1221.59 00:32:18.670 clat (usec): min=661, max=114216, avg=22289.81, stdev=20784.18 00:32:18.670 lat (usec): min=668, max=114224, avg=22448.07, stdev=20942.97 00:32:18.670 clat percentiles (msec): 00:32:18.670 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:32:18.670 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 17], 60.00th=[ 20], 00:32:18.670 | 70.00th=[ 23], 80.00th=[ 26], 90.00th=[ 35], 95.00th=[ 80], 00:32:18.670 | 99.00th=[ 106], 99.50th=[ 111], 99.90th=[ 114], 99.95th=[ 114], 00:32:18.670 | 99.99th=[ 114] 00:32:18.670 bw ( KiB/s): min=11080, max=13872, per=18.55%, avg=12476.00, stdev=1974.24, samples=2 00:32:18.670 iops : min= 2770, max= 3468, avg=3119.00, stdev=493.56, samples=2 00:32:18.670 lat (usec) : 750=0.08% 00:32:18.670 lat (msec) : 2=0.19%, 4=0.46%, 10=10.84%, 20=49.84%, 50=34.66% 00:32:18.670 lat (msec) : 100=2.53%, 250=1.39% 00:32:18.670 cpu : usr=1.88%, sys=3.67%, ctx=190, majf=0, minf=2 00:32:18.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:18.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:18.670 issued rwts: total=3072,3246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:18.670 job3: (groupid=0, jobs=1): err= 0: pid=1342797: Tue Nov 19 09:34:19 2024 00:32:18.670 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:32:18.670 slat (nsec): min=1089, max=26559k, avg=141206.82, stdev=1122904.47 00:32:18.670 clat (usec): min=5343, max=76012, 
avg=17627.91, stdev=10124.04 00:32:18.670 lat (usec): min=5408, max=76017, avg=17769.11, stdev=10195.23 00:32:18.670 clat percentiles (usec): 00:32:18.670 | 1.00th=[ 7439], 5.00th=[10814], 10.00th=[11469], 20.00th=[11994], 00:32:18.670 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[13566], 00:32:18.670 | 70.00th=[19268], 80.00th=[25297], 90.00th=[29492], 95.00th=[34341], 00:32:18.670 | 99.00th=[69731], 99.50th=[70779], 99.90th=[76022], 99.95th=[76022], 00:32:18.670 | 99.99th=[76022] 00:32:18.670 write: IOPS=4068, BW=15.9MiB/s (16.7MB/s)(15.9MiB/1003msec); 0 zone resets 00:32:18.670 slat (nsec): min=1909, max=22945k, avg=115932.33, stdev=898606.74 00:32:18.670 clat (usec): min=1945, max=85243, avg=15511.68, stdev=9204.13 00:32:18.670 lat (usec): min=1961, max=85247, avg=15627.61, stdev=9271.12 00:32:18.670 clat percentiles (usec): 00:32:18.670 | 1.00th=[ 4424], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[11076], 00:32:18.671 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12518], 60.00th=[12911], 00:32:18.671 | 70.00th=[13698], 80.00th=[21365], 90.00th=[25035], 95.00th=[27657], 00:32:18.671 | 99.00th=[62653], 99.50th=[80217], 99.90th=[85459], 99.95th=[85459], 00:32:18.671 | 99.99th=[85459] 00:32:18.671 bw ( KiB/s): min=11960, max=19672, per=23.52%, avg=15816.00, stdev=5453.21, samples=2 00:32:18.671 iops : min= 2990, max= 4918, avg=3954.00, stdev=1363.30, samples=2 00:32:18.671 lat (msec) : 2=0.09%, 4=0.34%, 10=6.95%, 20=68.82%, 50=22.14% 00:32:18.671 lat (msec) : 100=1.66% 00:32:18.671 cpu : usr=2.10%, sys=4.29%, ctx=371, majf=0, minf=1 00:32:18.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:18.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:18.671 issued rwts: total=3584,4081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:18.671 00:32:18.671 Run status group 0 (all jobs): 00:32:18.671 READ: bw=61.3MiB/s (64.3MB/s), 11.9MiB/s-23.5MiB/s (12.5MB/s-24.7MB/s), io=64.1MiB (67.2MB), run=1002-1045msec 00:32:18.671 WRITE: bw=65.7MiB/s (68.9MB/s), 12.6MiB/s-24.0MiB/s (13.2MB/s-25.1MB/s), io=68.6MiB (72.0MB), run=1002-1045msec 00:32:18.671 00:32:18.671 Disk stats (read/write): 00:32:18.671 nvme0n1: ios=3616/3671, merge=0/0, ticks=51644/33633, in_queue=85277, util=98.00% 00:32:18.671 nvme0n2: ios=4608/5117, merge=0/0, ticks=14640/17502, in_queue=32142, util=82.92% 00:32:18.671 nvme0n3: ios=2048/2358, merge=0/0, ticks=38831/57097, in_queue=95928, util=87.51% 00:32:18.671 nvme0n4: ios=2599/3037, merge=0/0, ticks=31693/29973, in_queue=61666, util=95.91% 00:32:18.671 09:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:18.671 09:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1343029 00:32:18.671 09:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:18.671 09:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:18.671 [global] 00:32:18.671 thread=1 00:32:18.671 invalidate=1 00:32:18.671 rw=read 00:32:18.671 time_based=1 00:32:18.671 runtime=10 00:32:18.671 ioengine=libaio 00:32:18.671 direct=1 00:32:18.671 bs=4096 00:32:18.671 iodepth=1 00:32:18.671 norandommap=1 00:32:18.671 numjobs=1 
00:32:18.671 00:32:18.671 [job0] 00:32:18.671 filename=/dev/nvme0n1 00:32:18.671 [job1] 00:32:18.671 filename=/dev/nvme0n2 00:32:18.671 [job2] 00:32:18.671 filename=/dev/nvme0n3 00:32:18.671 [job3] 00:32:18.671 filename=/dev/nvme0n4 00:32:18.671 Could not set queue depth (nvme0n1) 00:32:18.671 Could not set queue depth (nvme0n2) 00:32:18.671 Could not set queue depth (nvme0n3) 00:32:18.671 Could not set queue depth (nvme0n4) 00:32:18.929 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:18.929 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:18.929 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:18.929 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:18.929 fio-3.35 00:32:18.929 Starting 4 threads 00:32:22.208 09:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:22.208 09:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:22.208 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=266240, buflen=4096 00:32:22.208 fio: pid=1343171, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:22.208 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46268416, buflen=4096 00:32:22.208 fio: pid=1343170, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:22.208 09:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.208 09:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:22.208 09:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.208 09:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:22.208 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=786432, buflen=4096 00:32:22.208 fio: pid=1343167, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:22.467 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54288384, buflen=4096 00:32:22.467 fio: pid=1343169, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:22.467 09:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.467 09:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:22.467 00:32:22.467 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1343167: Tue Nov 19 09:34:23 2024 00:32:22.467 read: IOPS=62, BW=247KiB/s (253kB/s)(768KiB/3112msec) 00:32:22.467 slat 
(usec): min=5, max=15649, avg=169.95, stdev=1537.92 00:32:22.467 clat (usec): min=179, max=41807, avg=15924.99, stdev=19899.49 00:32:22.467 lat (usec): min=185, max=56890, avg=16095.67, stdev=20010.02 00:32:22.467 clat percentiles (usec): 00:32:22.467 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 188], 00:32:22.467 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 245], 60.00th=[ 285], 00:32:22.467 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:22.467 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:22.467 | 99.99th=[41681] 00:32:22.467 bw ( KiB/s): min= 96, max= 942, per=0.79%, avg=238.33, stdev=344.74, samples=6 00:32:22.467 iops : min= 24, max= 235, avg=59.50, stdev=85.98, samples=6 00:32:22.467 lat (usec) : 250=55.44%, 500=5.70% 00:32:22.467 lat (msec) : 50=38.34% 00:32:22.467 cpu : usr=0.00%, sys=0.13%, ctx=196, majf=0, minf=1 00:32:22.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.467 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.467 issued rwts: total=193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:22.467 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1343169: Tue Nov 19 09:34:23 2024 00:32:22.467 read: IOPS=4030, BW=15.7MiB/s (16.5MB/s)(51.8MiB/3289msec) 00:32:22.467 slat (usec): min=6, max=14731, avg=10.37, stdev=189.34 00:32:22.467 clat (usec): min=168, max=9221, avg=235.15, stdev=94.40 00:32:22.467 lat (usec): min=178, max=14998, avg=245.52, stdev=213.01 00:32:22.467 clat percentiles (usec): 00:32:22.467 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 212], 00:32:22.467 | 30.00th=[ 219], 40.00th=[ 229], 50.00th=[ 245], 60.00th=[ 249], 00:32:22.467 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 260], 00:32:22.467 | 99.00th=[ 273], 99.50th=[ 334], 99.90th=[ 478], 99.95th=[ 502], 00:32:22.467 | 99.99th=[ 5407] 00:32:22.467 bw ( KiB/s): min=15256, max=17664, per=53.21%, avg=16052.33, stdev=986.89, samples=6 00:32:22.467 iops : min= 3814, max= 4416, avg=4013.00, stdev=246.64, samples=6 00:32:22.467 lat (usec) : 250=69.78%, 500=30.16%, 750=0.02%, 1000=0.01% 00:32:22.467 lat (msec) : 2=0.01%, 10=0.02% 00:32:22.467 cpu : usr=1.00%, sys=3.56%, ctx=13259, majf=0, minf=2 00:32:22.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.467 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.467 issued rwts: total=13255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:22.467 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1343170: Tue Nov 19 09:34:23 2024 00:32:22.467 read: IOPS=3928, BW=15.3MiB/s (16.1MB/s)(44.1MiB/2876msec) 00:32:22.467 slat (nsec): min=6405, max=30033, avg=7355.52, stdev=897.21 00:32:22.467 clat (usec): min=194, max=564, avg=244.40, stdev=16.56 00:32:22.467 lat (usec): min=201, max=594, avg=251.76, stdev=16.58 00:32:22.467 clat percentiles (usec): 00:32:22.467 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 235], 00:32:22.467 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 249], 00:32:22.467 | 70.00th=[ 251], 80.00th=[ 
253], 90.00th=[ 258], 95.00th=[ 260], 00:32:22.467 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 424], 99.95th=[ 498], 00:32:22.467 | 99.99th=[ 545] 00:32:22.467 bw ( KiB/s): min=15480, max=17392, per=52.67%, avg=15889.60, stdev=841.63, samples=5 00:32:22.467 iops : min= 3870, max= 4348, avg=3972.40, stdev=210.41, samples=5 00:32:22.467 lat (usec) : 250=61.95%, 500=38.01%, 750=0.04% 00:32:22.467 cpu : usr=1.22%, sys=3.27%, ctx=11297, majf=0, minf=2 00:32:22.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.467 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.467 issued rwts: total=11297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:22.467 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1343171: Tue Nov 19 09:34:23 2024 00:32:22.467 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(260KiB/2686msec) 00:32:22.467 slat (nsec): min=9858, max=36807, avg=15209.45, stdev=5902.71 00:32:22.467 clat (usec): min=40914, max=41307, avg=40986.34, stdev=50.61 00:32:22.467 lat (usec): min=40936, max=41343, avg=41001.62, stdev=52.34 00:32:22.467 clat percentiles (usec): 00:32:22.467 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:22.467 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:22.467 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:22.467 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:22.467 | 99.99th=[41157] 00:32:22.467 bw ( KiB/s): min= 96, max= 104, per=0.32%, avg=97.60, stdev= 3.58, samples=5 00:32:22.467 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:32:22.467 lat (msec) : 50=98.48% 00:32:22.467 cpu : usr=0.07%, sys=0.00%, ctx=67, majf=0, minf=2 00:32:22.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:22.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.467 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.467 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:22.467 00:32:22.467 Run status group 0 (all jobs): 00:32:22.467 READ: bw=29.5MiB/s (30.9MB/s), 96.8KiB/s-15.7MiB/s (99.1kB/s-16.5MB/s), io=96.9MiB (102MB), run=2686-3289msec 00:32:22.467 00:32:22.467 Disk stats (read/write): 00:32:22.467 nvme0n1: ios=191/0, merge=0/0, ticks=3018/0, in_queue=3018, util=93.28% 00:32:22.467 nvme0n2: ios=12259/0, merge=0/0, ticks=2874/0, in_queue=2874, util=93.66% 00:32:22.467 nvme0n3: ios=11102/0, merge=0/0, ticks=2655/0, in_queue=2655, util=96.10% 00:32:22.467 nvme0n4: ios=97/0, merge=0/0, ticks=3177/0, in_queue=3177, util=99.77% 00:32:22.726 09:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.726 09:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:22.984 09:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:22.984 09:34:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:23.241 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:23.241 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:23.242 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:23.242 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:23.500 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:23.500 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1343029 00:32:23.500 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:23.500 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:23.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:23.500 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:23.500 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:23.758 nvmf hotplug test: fio failed as expected 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:23.758 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.017 rmmod nvme_tcp 00:32:24.017 rmmod nvme_fabrics 00:32:24.017 rmmod nvme_keyring 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1340387 ']' 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1340387 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1340387 ']' 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1340387 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1340387 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1340387' 00:32:24.017 killing process with pid 1340387 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1340387 00:32:24.017 09:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1340387 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-save 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.276 09:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.180 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:26.180 00:32:26.180 real 0m25.941s 00:32:26.180 user 1m31.182s 00:32:26.180 sys 0m11.232s 00:32:26.180 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:26.181 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:26.181 ************************************ 00:32:26.181 END TEST nvmf_fio_target 00:32:26.181 ************************************ 00:32:26.181 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:26.181 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:26.181 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:26.181 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:26.441 ************************************ 00:32:26.441 START TEST nvmf_bdevio 00:32:26.441 ************************************ 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:26.441 * Looking for test storage... 
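Aside (annotation, not captured output): the nvmftestfini sequence traced above tears down in reverse order of setup — rmmod nvme_tcp/nvme_fabrics/nvme_keyring, kill the target process (pid 1340387), then restore iptables through the iptr helper. The three steps traced at nvmf/common.sh@791 amount to something like the following; only the three commands themselves appear in the trace, the pipeline shape is assumed:

    # Drop only the rules tagged with the SPDK_NVMF comment (the port-4420
    # ACCEPT rule added during setup), leaving the rest of the ruleset intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

After that, remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace and flushes the interface addresses, and the run moves on to nvmf_bdevio.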
00:32:26.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.441 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:26.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.442 --rc genhtml_branch_coverage=1 00:32:26.442 --rc genhtml_function_coverage=1 00:32:26.442 --rc genhtml_legend=1 00:32:26.442 --rc geninfo_all_blocks=1 00:32:26.442 --rc geninfo_unexecuted_blocks=1 00:32:26.442 00:32:26.442 ' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:26.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.442 --rc genhtml_branch_coverage=1 00:32:26.442 --rc genhtml_function_coverage=1 00:32:26.442 --rc genhtml_legend=1 00:32:26.442 --rc geninfo_all_blocks=1 00:32:26.442 --rc geninfo_unexecuted_blocks=1 00:32:26.442 00:32:26.442 ' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:26.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.442 --rc genhtml_branch_coverage=1 00:32:26.442 --rc genhtml_function_coverage=1 00:32:26.442 --rc genhtml_legend=1 00:32:26.442 --rc geninfo_all_blocks=1 00:32:26.442 --rc geninfo_unexecuted_blocks=1 00:32:26.442 00:32:26.442 ' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:26.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.442 --rc genhtml_branch_coverage=1 00:32:26.442 --rc genhtml_function_coverage=1 00:32:26.442 --rc genhtml_legend=1 00:32:26.442 --rc geninfo_all_blocks=1 00:32:26.442 --rc geninfo_unexecuted_blocks=1 00:32:26.442 00:32:26.442 ' 00:32:26.442 09:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:26.442 09:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:26.442 09:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.012 09:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:33.012 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:33.012 09:34:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:33.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:33.012 Found net devices under 0000:86:00.0: cvl_0_0 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.012 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:33.013 Found net devices under 0000:86:00.1: cvl_0_1 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:33.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:33.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms
00:32:33.013
00:32:33.013 --- 10.0.0.2 ping statistics ---
00:32:33.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:33.013 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:33.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:33.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:32:33.013
00:32:33.013 --- 10.0.0.1 ping statistics ---
00:32:33.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:33.013 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
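Everything from nvmf_tcp_init to the modprobe is the fixture the rest of the test relies on: the first E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to host the target, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, an iptables rule admits TCP/4420, and the two pings prove the link works in both directions. Condensed into a runnable sketch (interface names and addresses exactly as in the trace):

    # Target interface goes into its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address each side of the back-to-back link.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic ahead of any default-deny rules.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1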
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1347406
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1347406
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1347406 ']'
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:33.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:33.013 [2024-11-19 09:34:33.347476] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:33.013 [2024-11-19 09:34:33.348482] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:32:33.013 [2024-11-19 09:34:33.348521] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:33.013 [2024-11-19 09:34:33.426810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:33.013 [2024-11-19 09:34:33.470299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:33.013 [2024-11-19 09:34:33.470337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:33.013 [2024-11-19 09:34:33.470344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:33.013 [2024-11-19 09:34:33.470351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:33.013 [2024-11-19 09:34:33.470356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:33.013 [2024-11-19 09:34:33.471880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:32:33.013 [2024-11-19 09:34:33.471986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:32:33.013 [2024-11-19 09:34:33.472100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:32:33.013 [2024-11-19 09:34:33.472101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:32:33.013 [2024-11-19 09:34:33.540259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:33.013 [2024-11-19 09:34:33.541013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:33.013 [2024-11-19 09:34:33.541308] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:32:33.013 [2024-11-19 09:34:33.541709] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:33.013 [2024-11-19 09:34:33.541748] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:33.013 [2024-11-19 09:34:33.608883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:33.013 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:33.013 Malloc0
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:33.014 [2024-11-19 09:34:33.697103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:32:33.014 {
00:32:33.014 "params": {
00:32:33.014 "name": "Nvme$subsystem",
00:32:33.014 "trtype": "$TEST_TRANSPORT",
00:32:33.014 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:33.014 "adrfam": "ipv4",
00:32:33.014 "trsvcid": "$NVMF_PORT",
00:32:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:33.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:33.014 "hdgst": ${hdgst:-false},
00:32:33.014 "ddgst": ${ddgst:-false}
00:32:33.014 },
00:32:33.014 "method": "bdev_nvme_attach_controller"
00:32:33.014 }
00:32:33.014 EOF
00:32:33.014 )")
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:32:33.014 09:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:32:33.014 "params": {
00:32:33.014 "name": "Nvme1",
00:32:33.014 "trtype": "tcp",
00:32:33.014 "traddr": "10.0.0.2",
00:32:33.014 "adrfam": "ipv4",
00:32:33.014 "trsvcid": "4420",
00:32:33.014 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:33.014 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:33.014 "hdgst": false,
00:32:33.014 "ddgst": false
00:32:33.014 },
00:32:33.014 "method": "bdev_nvme_attach_controller"
00:32:33.014 }'
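gen_nvmf_target_json, traced above, expands the heredoc once per subsystem and hands the result to bdevio over the /dev/fd/62 process substitution, so the initiator side needs no RPC socket at all: its entire configuration is the single bdev_nvme_attach_controller call just printed. A sketch of invoking bdevio by hand with the same config follows; the outer subsystems/bdev framing is the usual shape of an SPDK JSON config and is assumed here, since only the inner object is visible in this excerpt:

    # Assumed wrapper around the object printed above.
    ./test/bdev/bdevio/bdevio --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    )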
00:32:33.014 [2024-11-19 09:34:33.748112] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:32:33.014 [2024-11-19 09:34:33.748161] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1347547 ]
00:32:33.014 [2024-11-19 09:34:33.822735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:33.014 [2024-11-19 09:34:33.867390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:33.014 [2024-11-19 09:34:33.867497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:33.014 [2024-11-19 09:34:33.867498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:32:33.272 I/O targets:
00:32:33.272 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:32:33.272
00:32:33.272
00:32:33.272 CUnit - A unit testing framework for C - Version 2.1-3
00:32:33.272 http://cunit.sourceforge.net/
00:32:33.272
00:32:33.272
00:32:33.272 Suite: bdevio tests on: Nvme1n1
00:32:33.272 Test: blockdev write read block ...passed
00:32:33.272 Test: blockdev write zeroes read block ...passed
00:32:33.272 Test: blockdev write zeroes read no split ...passed
00:32:33.272 Test: blockdev write zeroes read split ...passed
00:32:33.272 Test: blockdev write zeroes read split partial ...passed
00:32:33.272 Test: blockdev reset ...[2024-11-19 09:34:34.290571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:33.272 [2024-11-19 09:34:34.290633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ec340 (9): Bad file descriptor
00:32:33.530 [2024-11-19 09:34:34.424907] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:32:33.530 passed
00:32:33.530 Test: blockdev write read 8 blocks ...passed
00:32:33.530 Test: blockdev write read size > 128k ...passed
00:32:33.530 Test: blockdev write read invalid size ...passed
00:32:33.530 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:32:33.530 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:32:33.530 Test: blockdev write read max offset ...passed
00:32:33.789 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:32:33.789 Test: blockdev writev readv 8 blocks ...passed
00:32:33.789 Test: blockdev writev readv 30 x 1block ...passed
00:32:33.789 Test: blockdev writev readv block ...passed
00:32:33.789 Test: blockdev writev readv size > 128k ...passed
00:32:33.789 Test: blockdev writev readv size > 128k in two iovs ...passed
00:32:33.789 Test: blockdev comparev and writev ...[2024-11-19 09:34:34.635898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:33.789 [2024-11-19 09:34:34.635924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.635939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:33.789 [2024-11-19 09:34:34.635952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.636253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:33.789 [2024-11-19 09:34:34.636263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.636275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:33.789 [2024-11-19 09:34:34.636282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.636563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:33.789 [2024-11-19 09:34:34.636574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.636585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:33.789 [2024-11-19 09:34:34.636593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.636880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:33.789 [2024-11-19 09:34:34.636891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.636902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:32:33.789 [2024-11-19 09:34:34.636910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:32:33.789 passed
00:32:33.789 Test: blockdev nvme passthru rw ...passed
00:32:33.789 Test: blockdev nvme passthru vendor specific ...[2024-11-19 09:34:34.719349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:32:33.789 [2024-11-19 09:34:34.719365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.719478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:32:33.789 [2024-11-19 09:34:34.719488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.719593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:32:33.789 [2024-11-19 09:34:34.719602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:32:33.789 [2024-11-19 09:34:34.719713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:32:33.789 [2024-11-19 09:34:34.719726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:32:33.789 passed
00:32:33.789 Test: blockdev nvme admin passthru ...passed
00:32:33.789 Test: blockdev copy ...passed
00:32:33.789
00:32:33.789 Run Summary: Type Total Ran Passed Failed Inactive
00:32:33.789 suites 1 1 n/a 0 0
00:32:33.789 tests 23 23 23 0 0
00:32:33.789 asserts 152 152 152 0 n/a
00:32:33.789
00:32:33.789 Elapsed time = 1.353 seconds
00:32:34.047 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:34.047 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:34.048 rmmod nvme_tcp
00:32:34.048 rmmod nvme_fabrics
00:32:34.048 rmmod nvme_keyring
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1347406 ']'
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1347406
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1347406 ']'
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1347406
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:34.048 09:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1347406
00:32:34.048 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3
00:32:34.048 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']'
00:32:34.048 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1347406'
00:32:34.048 killing process with pid 1347406
00:32:34.048 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1347406
00:32:34.048 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1347406
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:34.308 09:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:36.849 09:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
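The teardown traced above undoes setup in reverse order: delete the subsystem over RPC while the target is still up, unload the host-side NVMe modules (retried, since nvme-tcp can stay busy briefly), kill the target by pid, strip the SPDK_NVMF-tagged iptables rule by round-tripping through iptables-save, and drop the namespace. As one condensed sequence (pid and names from this run; rpc.py stands in for the rpc_cmd wrapper, and the netns delete is what _remove_spdk_ns is assumed to do, its body not being shown in this excerpt):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    for i in {1..20}; do modprobe -v -r nvme-tcp && break; done
    modprobe -v -r nvme-fabrics
    kill 1347406                                   # nvmf_tgt pid in this log
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                # assumed _remove_spdk_ns equivalent
    ip -4 addr flush cvl_0_1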
00:32:36.849
00:32:36.849 real 0m10.041s
00:32:36.849 user 0m9.553s
00:32:36.849 sys 0m5.205s
00:32:36.849 09:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable
00:32:36.849 09:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:32:36.849 ************************************
00:32:36.849 END TEST nvmf_bdevio
00:32:36.849 ************************************
00:32:36.849 09:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:32:36.849
00:32:36.849 real 4m32.862s
00:32:36.849 user 9m6.631s
00:32:36.849 sys 1m51.539s
00:32:36.849 09:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable
00:32:36.849 09:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:36.849 ************************************
00:32:36.849 END TEST nvmf_target_core_interrupt_mode
00:32:36.849 ************************************
00:32:36.849 09:34:37 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:32:36.849 09:34:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:32:36.849 09:34:37 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:32:36.849 09:34:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:36.849 ************************************
00:32:36.849 START TEST nvmf_interrupt
00:32:36.849 ************************************
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:32:36.849 * Looking for test storage...
00:32:36.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0
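The scripts/common.sh trace above is lt 1.15 2 deciding that the installed lcov predates 2.x: cmp_versions splits both strings on ., - and :, then walks the fields numerically until one side wins. The kernel of that walk, restated as a self-contained function (numeric fields only; the real decimal() helper additionally rejects non-numeric fields):

    # True (exit 0) when version $1 sorts strictly before $2, field by field.
    version_lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        (( len = ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov < 2: use pre-2.0 --rc option names'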
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:36.849 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:32:36.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:36.850 --rc genhtml_branch_coverage=1
00:32:36.850 --rc genhtml_function_coverage=1
00:32:36.850 --rc genhtml_legend=1
00:32:36.850 --rc geninfo_all_blocks=1
00:32:36.850 --rc geninfo_unexecuted_blocks=1
00:32:36.850
00:32:36.850 '
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:32:36.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:36.850 --rc genhtml_branch_coverage=1
00:32:36.850 --rc genhtml_function_coverage=1
00:32:36.850 --rc genhtml_legend=1
00:32:36.850 --rc geninfo_all_blocks=1
00:32:36.850 --rc geninfo_unexecuted_blocks=1
00:32:36.850
00:32:36.850 '
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:32:36.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:36.850 --rc genhtml_branch_coverage=1
00:32:36.850 --rc genhtml_function_coverage=1
00:32:36.850 --rc genhtml_legend=1
00:32:36.850 --rc geninfo_all_blocks=1
00:32:36.850 --rc geninfo_unexecuted_blocks=1
00:32:36.850
00:32:36.850 '
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:32:36.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:36.850 --rc genhtml_branch_coverage=1
00:32:36.850 --rc genhtml_function_coverage=1
00:32:36.850 --rc genhtml_legend=1
00:32:36.850 --rc geninfo_all_blocks=1
00:32:36.850 --rc geninfo_unexecuted_blocks=1
00:32:36.850
00:32:36.850 '
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable
00:32:36.850 09:34:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:42.132 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:42.133 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=()
00:32:42.133 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs
00:32:42.133 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=()
00:32:42.133 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:32:42.133 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=()
00:32:42.133 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers
00:32:42.133 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=()
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=()
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=()
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=()
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:32:42.393 Found 0000:86:00.0 (0x8086 - 0x159b)
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:32:42.393 Found 0000:86:00.1 (0x8086 - 0x159b)
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:32:42.393 Found net devices under 0000:86:00.0: cvl_0_0
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:32:42.393 Found net devices under 0000:86:00.1: cvl_0_1
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:42.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:42.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms
00:32:42.393
00:32:42.393 --- 10.0.0.2 ping statistics ---
00:32:42.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:42.393 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms
00:32:42.393 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:42.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:42.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:32:42.393
00:32:42.393 --- 10.0.0.1 ping statistics ---
00:32:42.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:42.393 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:32:42.394 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:42.394 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:32:42.394 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:42.394 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:42.394 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:42.394 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:42.394 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:42.394 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:42.394 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1351198
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1351198
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 1351198 ']'
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:42.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:42.653 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:42.653 [2024-11-19 09:34:43.526094] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:42.653 [2024-11-19 09:34:43.526981] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization...
00:32:42.653 [2024-11-19 09:34:43.527014] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:42.653 [2024-11-19 09:34:43.591091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:32:42.653 [2024-11-19 09:34:43.633351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:42.653 [2024-11-19 09:34:43.633386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:42.653 [2024-11-19 09:34:43.633393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:42.653 [2024-11-19 09:34:43.633400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:42.653 [2024-11-19 09:34:43.633405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:42.653 [2024-11-19 09:34:43.634587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:42.653 [2024-11-19 09:34:43.634594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:42.653 [2024-11-19 09:34:43.701500] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:42.653 [2024-11-19 09:34:43.701648] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:42.653 [2024-11-19 09:34:43.701736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:42.913 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:32:42.913 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0
00:32:42.913 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:42.913 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable
00:32:42.913 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:42.913 09:34:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:42.913 09:34:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:32:42.914 5000+0 records in
00:32:42.914 5000+0 records out
00:32:42.914 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0181091 s, 565 MB/s
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:42.914 AIO0
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:42.914 [2024-11-19 09:34:43.835361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
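With the AIO-backed bdev and the interrupt-mode transport in place, the reactor_is_idle checks that follow sample top -bHn 1 for the target pid and compare the reactor thread's %CPU against the thresholds declared in interrupt/common.sh (busy at 65 and above, idle below 30): a reactor genuinely parked in interrupt mode should read ~0.0 where a polling reactor would read ~100. A sketch of one such sample (the pid, the -w width, and the $9 field position are all taken from this trace):

    pid=1351198
    # One batch iteration, threads shown, wide output; keep the reactor_0 row.
    row=$(top -bHn 1 -p "$pid" -w 256 | grep reactor_0 | sed -e 's/^\s*//g')
    cpu_rate=$(awk '{print $9}' <<< "$row")
    # Floating-point compare in awk, since bash arithmetic is integer-only.
    if awk -v c="$cpu_rate" 'BEGIN { exit !(c < 30) }'; then
        echo "reactor_0 idle at ${cpu_rate}% CPU"
    fi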
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.914 [2024-11-19 09:34:43.875634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1351198 0 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1351198 0 idle 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1351198 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1351198 -w 256 00:32:42.914 09:34:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:43.173 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1351198 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0' 00:32:43.173 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1351198 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0 00:32:43.173 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.173 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.173 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:43.173 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1351198 1 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1351198 1 idle 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1351198 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1351198 -w 256 00:32:43.174 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:43.431 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1351204 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:43.431 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1351204 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:43.431 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.431 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1351455 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
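Annotation: both reactors have just been verified idle, and the spdk_nvme_perf command above (queue depth 256, 4096-byte I/O, a randrw mix with 30% reads via -M 30, 10 seconds, cores 2-3 via -c 0xC) is what the test expects to push them past BUSY_THRESHOLD=30. The reactor probe traced here is a single batch-mode top sample; a condensed sketch of that logic, reconstructed from the xtrace rather than copied verbatim from interrupt/common.sh:

# Sketch of the reactor busy/idle probe seen in the trace above.
# Field 9 of top's batch output is %CPU for the reactor_<idx> thread.
reactor_cpu_rate() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
        | sed -e 's/^\s*//g' | awk '{print $9}'
}
rate=$(reactor_cpu_rate 1351198 0)   # e.g. 99.9 while perf is running
rate=${rate%.*}                      # truncate: 99.9 -> 99, 0.0 -> 0
(( rate >= 30 )) && echo busy || echo idle   # BUSY_THRESHOLD=30 here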
00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1351198 0 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1351198 0 busy 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1351198 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1351198 -w 256 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1351198 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.42 reactor_0' 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1351198 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.42 reactor_0 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1351198 1 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1351198 1 busy 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1351198 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1351198 -w 256 00:32:43.432 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1351204 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.27 reactor_1' 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1351204 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.27 reactor_1 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:43.690 09:34:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1351455 00:32:53.654 Initializing NVMe Controllers 00:32:53.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:53.654 Controller IO queue size 256, less than required. 00:32:53.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:53.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:53.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:53.654 Initialization complete. Launching workers. 
00:32:53.654 ======================================================== 00:32:53.654 Latency(us) 00:32:53.654 Device Information : IOPS MiB/s Average min max 00:32:53.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16069.70 62.77 15938.53 4081.45 29941.74 00:32:53.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16266.90 63.54 15741.92 7404.85 26495.18 00:32:53.654 ======================================================== 00:32:53.654 Total : 32336.59 126.31 15839.62 4081.45 29941.74 00:32:53.654 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1351198 0 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1351198 0 idle 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1351198 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1351198 -w 256 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1351198 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0' 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1351198 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1351198 1 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1351198 1 idle 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1351198 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1351198 -w 256 00:32:53.654 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1351204 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1' 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1351204 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:09.99 reactor_1 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:53.913 09:34:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:54.172 09:34:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:54.172 09:34:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:32:54.172 09:34:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:32:54.172 09:34:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:32:54.172 09:34:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1351198 0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1351198 0 idle 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1351198 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1351198 -w 256 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1351198 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.49 reactor_0' 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1351198 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.49 reactor_0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1351198 1 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1351198 1 idle 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1351198 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
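Annotation: the idle re-check running here confirms the target dropped back to interrupt-driven operation after the kernel initiator attached. The connect-and-wait step a few lines above condenses to roughly the following (NQN, address, and serial are taken from this log; the retry bound mirrors waitforserial's i <= 15, and this is a sketch rather than the verbatim helper):

# Sketch of the kernel-initiator attach traced above.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
for ((i = 0; i <= 15; i++)); do
    # waitforserial: the namespace surfaces as a block device whose
    # SERIAL column matches the subsystem serial.
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    sleep 2
done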
00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1351198 -w 256 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1351204 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1' 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1351204 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:56.774 09:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:57.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:57.082 rmmod nvme_tcp 00:32:57.082 rmmod nvme_fabrics 00:32:57.082 rmmod nvme_keyring 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1351198 ']' 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1351198 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 1351198 ']' 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 1351198 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1351198 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1351198' 00:32:57.082 killing process with pid 1351198 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 1351198 00:32:57.082 09:34:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 1351198 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:57.341 09:34:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.252 09:35:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.252 00:32:59.252 real 0m22.867s 00:32:59.252 user 0m39.388s 00:32:59.252 sys 0m8.608s 00:32:59.252 09:35:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:59.252 09:35:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:59.252 ************************************ 00:32:59.252 END TEST nvmf_interrupt 00:32:59.252 ************************************ 00:32:59.252 00:32:59.252 real 27m24.104s 00:32:59.252 user 56m27.088s 00:32:59.252 sys 9m20.885s 00:32:59.252 09:35:00 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:59.252 09:35:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.252 ************************************ 00:32:59.252 END TEST nvmf_tcp 00:32:59.252 ************************************ 00:32:59.511 09:35:00 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:32:59.511 09:35:00 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:59.511 09:35:00 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:59.511 09:35:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:59.511 09:35:00 -- common/autotest_common.sh@10 -- # set +x 00:32:59.511 ************************************ 00:32:59.511 START TEST spdkcli_nvmf_tcp 00:32:59.511 ************************************ 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:59.511 * Looking for test storage... 00:32:59.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:59.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.511 --rc genhtml_branch_coverage=1 00:32:59.511 --rc genhtml_function_coverage=1 00:32:59.511 --rc genhtml_legend=1 00:32:59.511 --rc geninfo_all_blocks=1 00:32:59.511 --rc geninfo_unexecuted_blocks=1 00:32:59.511 00:32:59.511 ' 00:32:59.511 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:59.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.511 --rc genhtml_branch_coverage=1 00:32:59.511 --rc genhtml_function_coverage=1 00:32:59.512 --rc genhtml_legend=1 00:32:59.512 --rc geninfo_all_blocks=1 00:32:59.512 --rc geninfo_unexecuted_blocks=1 00:32:59.512 00:32:59.512 ' 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:59.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.512 --rc genhtml_branch_coverage=1 00:32:59.512 --rc genhtml_function_coverage=1 00:32:59.512 --rc genhtml_legend=1 00:32:59.512 --rc geninfo_all_blocks=1 00:32:59.512 --rc geninfo_unexecuted_blocks=1 00:32:59.512 00:32:59.512 ' 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:59.512 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.512 --rc genhtml_branch_coverage=1 00:32:59.512 --rc genhtml_function_coverage=1 00:32:59.512 --rc genhtml_legend=1 00:32:59.512 --rc geninfo_all_blocks=1 00:32:59.512 --rc geninfo_unexecuted_blocks=1 00:32:59.512 00:32:59.512 ' 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:59.512 
09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:59.512 09:35:00 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:59.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:59.512 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1354148 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1354148 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 1354148 ']' 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:59.770 09:35:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.770 [2024-11-19 09:35:00.614840] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
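Annotation: the "[: : integer expression expected" message above is a genuine bash complaint, not log corruption: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty value, which test(1) rejects, and the surrounding if simply treats the failed test as false. A defensive form that avoids the message, with a hypothetical variable name for illustration (this is not the shipped common.sh code):

# Illustrative only; some_flag stands in for whatever common.sh line 33 tests.
some_flag=""                          # empty, as in the trace above
if [ "${some_flag:-0}" -eq 1 ]; then  # default to 0 so test(1) always sees an integer
    echo "feature enabled"
fi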
00:32:59.770 [2024-11-19 09:35:00.614889] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354148 ] 00:32:59.770 [2024-11-19 09:35:00.689457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:59.770 [2024-11-19 09:35:00.735220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.770 [2024-11-19 09:35:00.735221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:00.703 09:35:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:00.703 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:00.703 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:00.703 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:00.703 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:00.703 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:00.703 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:00.703 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:00.703 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:00.703 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:00.703 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:00.703 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4 secure_channel=True allow_any_host=True'\'' 00:33:00.703 '\''/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:00.703 ' 00:33:03.231 [2024-11-19 09:35:04.191312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.608 [2024-11-19 09:35:05.523757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:07.141 [2024-11-19 09:35:08.011386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:09.674 [2024-11-19 09:35:10.154229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:11.576 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:11.576 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:11.576 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:11.576 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:11.576 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:11.576 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:11.576 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:11.576 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:11.576 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:11.576 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:11.576 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:11.576 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:11.576 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4 secure_channel=True allow_any_host=True', False] 00:33:11.576 Executing command: ['/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:11.576 09:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@67 -- # timing_exit spdkcli_create_nvmf_config 00:33:11.576 09:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:11.576 09:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.576 09:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # timing_enter spdkcli_check_match 00:33:11.576 09:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:11.576 09:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.576 09:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # check_match 00:33:11.576 09:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:11.835 09:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:11.835 09:35:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:11.835 09:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@71 -- # timing_exit spdkcli_check_match 00:33:11.835 09:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:11.835 09:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.835 09:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@73 -- # timing_enter spdkcli_clear_nvmf_config 00:33:11.835 09:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:11.835 09:35:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:11.835 09:35:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:11.835 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:11.835 '\''/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:11.835 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:11.835 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:11.835 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:11.835 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:11.835 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:11.835 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:11.835 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:11.835 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:11.835 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:11.835 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:11.835 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:11.835 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:11.835 ' 00:33:18.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:18.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:18.403 Executing command: ['/nvmf/referral/nqn.2014-08.org.nvmexpress.discovery/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:18.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:18.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:18.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:18.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:18.403 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:18.403 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:18.403 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:18.403 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:18.403 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:18.403 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:18.403 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:18.403 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # timing_exit spdkcli_clear_nvmf_config 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@92 -- # killprocess 1354148 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1354148 ']' 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1354148 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1354148 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1354148' 00:33:18.403 killing process with pid 1354148 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 1354148 00:33:18.403 09:35:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 1354148 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1354148 ']' 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1354148 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1354148 ']' 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1354148 00:33:18.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1354148) - No such process 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 1354148 is not found' 00:33:18.403 Process with pid 1354148 is not found 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:18.403 00:33:18.403 real 0m18.663s 00:33:18.403 user 0m41.122s 00:33:18.403 sys 0m0.825s 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:18.403 09:35:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:18.403 ************************************ 00:33:18.403 END TEST spdkcli_nvmf_tcp 00:33:18.403 ************************************ 00:33:18.403 09:35:19 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:18.403 09:35:19 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:33:18.403 09:35:19 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:18.403 09:35:19 -- common/autotest_common.sh@10 -- # set +x 00:33:18.403 ************************************ 00:33:18.403 START TEST nvmf_identify_passthru 00:33:18.403 ************************************ 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:18.403 * Looking for test storage... 00:33:18.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:18.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.403 --rc genhtml_branch_coverage=1 00:33:18.403 --rc genhtml_function_coverage=1 00:33:18.403 --rc genhtml_legend=1 00:33:18.403 --rc geninfo_all_blocks=1 00:33:18.403 --rc geninfo_unexecuted_blocks=1 00:33:18.403 00:33:18.403 ' 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:18.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.403 --rc genhtml_branch_coverage=1 00:33:18.403 --rc genhtml_function_coverage=1 00:33:18.403 --rc genhtml_legend=1 00:33:18.403 --rc geninfo_all_blocks=1 00:33:18.403 --rc geninfo_unexecuted_blocks=1 00:33:18.403 00:33:18.403 ' 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:18.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.403 --rc genhtml_branch_coverage=1 00:33:18.403 --rc genhtml_function_coverage=1 00:33:18.403 --rc genhtml_legend=1 00:33:18.403 --rc geninfo_all_blocks=1 00:33:18.403 --rc geninfo_unexecuted_blocks=1 00:33:18.403 00:33:18.403 ' 00:33:18.403 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:18.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.403 --rc genhtml_branch_coverage=1 00:33:18.403 --rc genhtml_function_coverage=1 00:33:18.403 --rc genhtml_legend=1 00:33:18.403 --rc geninfo_all_blocks=1 00:33:18.403 --rc geninfo_unexecuted_blocks=1 00:33:18.403 00:33:18.403 ' 00:33:18.403 09:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.403 09:35:19 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.403 09:35:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.403 09:35:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.403 09:35:19 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.403 09:35:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:18.403 09:35:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.403 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:18.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.404 09:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.404 09:35:19 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.404 09:35:19 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.404 09:35:19 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.404 09:35:19 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.404 09:35:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.404 09:35:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.404 09:35:19 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.404 09:35:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:18.404 09:35:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.404 09:35:19 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.404 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:18.404 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:18.404 09:35:19 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.404 09:35:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:24.975 09:35:24 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:24.975 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:24.975 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:24.975 Found net devices under 0000:86:00.0: cvl_0_0 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:24.975 Found net devices under 0000:86:00.1: cvl_0_1 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:24.975 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:24.976 09:35:24 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:24.976 09:35:24 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:24.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:33:24.976 00:33:24.976 --- 10.0.0.2 ping statistics --- 00:33:24.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.976 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:24.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:24.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:33:24.976 00:33:24.976 --- 10.0.0.1 ping statistics --- 00:33:24.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.976 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:24.976 09:35:25 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:24.976 09:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:24.976 09:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:33:24.976 09:35:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:33:24.976 09:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:24.976 09:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:24.976 09:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:24.976 09:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:24.976 09:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:29.173 09:35:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:33:29.173 09:35:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:29.173 09:35:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:29.173 09:35:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:33.374 09:35:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:33.374 09:35:33 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.374 09:35:33 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.374 09:35:33 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1361628 00:33:33.374 09:35:33 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:33.374 09:35:33 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:33.374 09:35:33 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1361628 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 1361628 ']' 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:33.374 09:35:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.374 [2024-11-19 09:35:33.646989] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:33:33.374 [2024-11-19 09:35:33.647038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.374 [2024-11-19 09:35:33.726523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:33.374 [2024-11-19 09:35:33.770080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.374 [2024-11-19 09:35:33.770117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
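[Editor's note] The target was launched with --wait-for-rpc, so configuration that must precede subsystem init can be applied before the framework comes up. The JSON-RPC exchanges logged below boil down to three calls; a sketch of the same sequence via scripts/rpc.py (the test's rpc_cmd wrapper forwards to it), using the exact commands visible in this log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Must run before framework_start_init: enable Identify-Ctrlr passthru.
    "$RPC" nvmf_set_config --passthru-identify-ctrlr
    # Finish SPDK subsystem initialization (deferred by --wait-for-rpc).
    "$RPC" framework_start_init
    # Create the TCP transport with the same options the test uses.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
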
00:33:33.374 [2024-11-19 09:35:33.770124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.374 [2024-11-19 09:35:33.770130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.374 [2024-11-19 09:35:33.770136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:33.374 [2024-11-19 09:35:33.774968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.374 [2024-11-19 09:35:33.774994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:33.374 [2024-11-19 09:35:33.775111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.374 [2024-11-19 09:35:33.775112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:33.634 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:33.634 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:33:33.634 09:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:33.634 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.634 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.634 INFO: Log level set to 20 00:33:33.634 INFO: Requests: 00:33:33.634 { 00:33:33.634 "jsonrpc": "2.0", 00:33:33.634 "method": "nvmf_set_config", 00:33:33.634 "id": 1, 00:33:33.634 "params": { 00:33:33.634 "admin_cmd_passthru": { 00:33:33.634 "identify_ctrlr": true 00:33:33.634 } 00:33:33.634 } 00:33:33.634 } 00:33:33.634 00:33:33.634 INFO: response: 00:33:33.634 { 00:33:33.634 "jsonrpc": "2.0", 00:33:33.635 "id": 1, 00:33:33.635 "result": true 00:33:33.635 } 00:33:33.635 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.635 09:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.635 INFO: Setting log level to 20 00:33:33.635 INFO: Setting log level to 20 00:33:33.635 INFO: Log level set to 20 00:33:33.635 INFO: Log level set to 20 00:33:33.635 INFO: Requests: 00:33:33.635 { 00:33:33.635 "jsonrpc": "2.0", 00:33:33.635 "method": "framework_start_init", 00:33:33.635 "id": 1 00:33:33.635 } 00:33:33.635 00:33:33.635 INFO: Requests: 00:33:33.635 { 00:33:33.635 "jsonrpc": "2.0", 00:33:33.635 "method": "framework_start_init", 00:33:33.635 "id": 1 00:33:33.635 } 00:33:33.635 00:33:33.635 [2024-11-19 09:35:34.585269] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:33.635 INFO: response: 00:33:33.635 { 00:33:33.635 "jsonrpc": "2.0", 00:33:33.635 "id": 1, 00:33:33.635 "result": true 00:33:33.635 } 00:33:33.635 00:33:33.635 INFO: response: 00:33:33.635 { 00:33:33.635 "jsonrpc": "2.0", 00:33:33.635 "id": 1, 00:33:33.635 "result": true 00:33:33.635 } 00:33:33.635 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.635 09:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.635 09:35:34 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:33:33.635 INFO: Setting log level to 40 00:33:33.635 INFO: Setting log level to 40 00:33:33.635 INFO: Setting log level to 40 00:33:33.635 [2024-11-19 09:35:34.598618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.635 09:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:33.635 09:35:34 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.635 09:35:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:36.923 Nvme0n1 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.923 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.923 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.923 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:36.923 [2024-11-19 09:35:37.509520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.923 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.923 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:36.924 [ 00:33:36.924 { 00:33:36.924 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:36.924 "subtype": "Discovery", 00:33:36.924 "listen_addresses": [], 00:33:36.924 "allow_any_host": true, 00:33:36.924 "hosts": [] 00:33:36.924 }, 00:33:36.924 { 00:33:36.924 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:36.924 "subtype": "NVMe", 00:33:36.924 "listen_addresses": [ 00:33:36.924 { 00:33:36.924 "trtype": "TCP", 00:33:36.924 "adrfam": "IPv4", 00:33:36.924 "traddr": "10.0.0.2", 00:33:36.924 "trsvcid": "4420" 00:33:36.924 } 00:33:36.924 ], 00:33:36.924 "allow_any_host": true, 00:33:36.924 "hosts": [], 00:33:36.924 "serial_number": 
"SPDK00000000000001", 00:33:36.924 "model_number": "SPDK bdev Controller", 00:33:36.924 "max_namespaces": 1, 00:33:36.924 "min_cntlid": 1, 00:33:36.924 "max_cntlid": 65519, 00:33:36.924 "namespaces": [ 00:33:36.924 { 00:33:36.924 "nsid": 1, 00:33:36.924 "bdev_name": "Nvme0n1", 00:33:36.924 "name": "Nvme0n1", 00:33:36.924 "nguid": "BC43B310156A4A33B594A1230E00978B", 00:33:36.924 "uuid": "bc43b310-156a-4a33-b594-a1230e00978b" 00:33:36.924 } 00:33:36.924 ] 00:33:36.924 } 00:33:36.924 ] 00:33:36.924 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:36.924 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.924 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:36.924 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:36.924 09:35:37 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.924 rmmod nvme_tcp 00:33:36.924 rmmod nvme_fabrics 00:33:36.924 rmmod nvme_keyring 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1361628 ']' 00:33:36.924 09:35:37 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1361628 00:33:36.924 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 1361628 ']' 00:33:36.924 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 1361628 00:33:36.924 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:33:36.924 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:36.924 09:35:37 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1361628 00:33:37.183 09:35:38 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:37.183 09:35:38 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:37.183 09:35:38 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1361628' 00:33:37.183 killing process with pid 1361628 00:33:37.183 09:35:38 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 1361628 00:33:37.183 09:35:38 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 1361628 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:38.560 09:35:39 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.560 09:35:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:38.561 09:35:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.097 09:35:41 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:41.097 00:33:41.097 real 0m22.479s 00:33:41.097 user 0m29.733s 00:33:41.097 sys 0m6.145s 00:33:41.097 09:35:41 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:41.097 09:35:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:41.097 ************************************ 00:33:41.097 END TEST nvmf_identify_passthru 00:33:41.097 ************************************ 00:33:41.097 09:35:41 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:41.097 09:35:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:41.097 09:35:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:41.097 09:35:41 -- common/autotest_common.sh@10 -- # set +x 00:33:41.097 ************************************ 00:33:41.097 START TEST nvmf_dif 00:33:41.097 ************************************ 00:33:41.097 09:35:41 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:41.097 * Looking for test 
storage... 00:33:41.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:41.097 09:35:41 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:41.097 09:35:41 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:33:41.097 09:35:41 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:41.097 09:35:41 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:41.097 09:35:41 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.097 09:35:41 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:41.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.097 --rc genhtml_branch_coverage=1 00:33:41.097 --rc genhtml_function_coverage=1 00:33:41.097 --rc genhtml_legend=1 00:33:41.097 --rc geninfo_all_blocks=1 00:33:41.097 --rc geninfo_unexecuted_blocks=1 00:33:41.097 00:33:41.097 ' 00:33:41.097 09:35:41 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:41.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.097 --rc genhtml_branch_coverage=1 00:33:41.097 --rc genhtml_function_coverage=1 00:33:41.097 --rc genhtml_legend=1 00:33:41.097 --rc geninfo_all_blocks=1 00:33:41.097 --rc geninfo_unexecuted_blocks=1 00:33:41.097 00:33:41.097 ' 00:33:41.097 09:35:41 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:41.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.097 --rc genhtml_branch_coverage=1 00:33:41.097 --rc genhtml_function_coverage=1 00:33:41.097 --rc genhtml_legend=1 00:33:41.097 --rc geninfo_all_blocks=1 00:33:41.097 --rc geninfo_unexecuted_blocks=1 00:33:41.097 00:33:41.097 ' 00:33:41.097 09:35:41 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:41.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.097 --rc genhtml_branch_coverage=1 00:33:41.097 --rc genhtml_function_coverage=1 00:33:41.097 --rc genhtml_legend=1 00:33:41.097 --rc geninfo_all_blocks=1 00:33:41.097 --rc geninfo_unexecuted_blocks=1 00:33:41.097 00:33:41.097 ' 00:33:41.097 09:35:41 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.097 09:35:41 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.097 09:35:41 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.097 09:35:41 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.097 09:35:41 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.098 09:35:41 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.098 09:35:41 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:41.098 09:35:41 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:41.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.098 09:35:41 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:41.098 09:35:41 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:41.098 09:35:41 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:41.098 09:35:41 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:41.098 09:35:41 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.098 09:35:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:41.098 09:35:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:41.098 09:35:41 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:33:41.098 09:35:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:46.375 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.375 
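(Annotation: the gather_supported_nvmf_pci_devs trace above builds per-family PCI ID lists, e810, x722 and mlx, and matches them against a cached PCI bus scan. A minimal standalone sketch of the same vendor:device matching, assuming only the Linux sysfs layout; the pci_bus_cache helper itself is internal to SPDK's common.sh and not reproduced here:

# Sketch only: enumerate E810-class NICs straight from sysfs, mirroring
# the $intel:0x159b / $intel:0x1592 lookups visible in the trace.
for dev in /sys/bus/pci/devices/*; do
    ven=$(<"$dev/vendor"); id=$(<"$dev/device")   # e.g. 0x8086 / 0x159b
    if [[ $ven == 0x8086 && ( $id == 0x159b || $id == 0x1592 ) ]]; then
        echo "Found ${dev##*/} ($ven - $id)"
    fi
done

The two "Found 0000:86:00.x" lines that follow are this matching firing for both ports of the E810 adapter.)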
09:35:47 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:46.375 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:46.375 Found net devices under 0000:86:00.0: cvl_0_0 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:46.375 Found net devices under 0000:86:00.1: cvl_0_1 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.375 09:35:47 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.634 09:35:47 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:33:46.634 00:33:46.634 --- 10.0.0.2 ping statistics --- 00:33:46.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.634 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:33:46.635 09:35:47 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
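(Annotation: the namespace plumbing traced above is what lets a single host act as both NVMe/TCP target and initiator: one port, cvl_0_0, is moved into a private namespace and addressed as 10.0.0.2, while its peer cvl_0_1 stays in the root namespace as 10.0.0.1, and TCP port 4420 is opened for the target. Condensed from the commands in the trace; interface names are specific to this run and the steps need root:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # root ns -> target ns, validated by the pings here

The two ping transcripts around this point confirm reachability in both directions before the target is started inside the namespace.)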
00:33:46.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:33:46.635 00:33:46.635 --- 10.0.0.1 ping statistics --- 00:33:46.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.635 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:33:46.635 09:35:47 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.635 09:35:47 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:46.635 09:35:47 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:46.635 09:35:47 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:49.922 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:49.922 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:49.922 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:49.922 09:35:50 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:49.922 09:35:50 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:49.922 09:35:50 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:49.922 09:35:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1367106 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:49.922 09:35:50 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1367106 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 1367106 ']' 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:33:49.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.923 [2024-11-19 09:35:50.619487] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:33:49.923 [2024-11-19 09:35:50.619533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.923 [2024-11-19 09:35:50.700955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.923 [2024-11-19 09:35:50.742450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.923 [2024-11-19 09:35:50.742487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:49.923 [2024-11-19 09:35:50.742495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.923 [2024-11-19 09:35:50.742501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.923 [2024-11-19 09:35:50.742506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:49.923 [2024-11-19 09:35:50.743094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:33:49.923 09:35:50 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.923 09:35:50 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:49.923 09:35:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:49.923 09:35:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.923 [2024-11-19 09:35:50.878631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.923 09:35:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:49.923 09:35:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.923 ************************************ 00:33:49.923 START TEST fio_dif_1_default 00:33:49.923 ************************************ 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.923 bdev_null0 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.923 [2024-11-19 09:35:50.946944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:49.923 { 00:33:49.923 "params": { 00:33:49.923 "name": "Nvme$subsystem", 00:33:49.923 "trtype": "$TEST_TRANSPORT", 00:33:49.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.923 "adrfam": "ipv4", 00:33:49.923 "trsvcid": "$NVMF_PORT", 00:33:49.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.923 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:49.923 "hdgst": ${hdgst:-false}, 00:33:49.923 "ddgst": ${ddgst:-false} 00:33:49.923 }, 00:33:49.923 "method": "bdev_nvme_attach_controller" 00:33:49.923 } 00:33:49.923 EOF 00:33:49.923 )") 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:49.923 09:35:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:49.923 "params": { 00:33:49.923 "name": "Nvme0", 00:33:49.923 "trtype": "tcp", 00:33:49.923 "traddr": "10.0.0.2", 00:33:49.923 "adrfam": "ipv4", 00:33:49.923 "trsvcid": "4420", 00:33:49.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:49.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:49.923 "hdgst": false, 00:33:49.923 "ddgst": false 00:33:49.923 }, 00:33:49.923 "method": "bdev_nvme_attach_controller" 00:33:49.923 }' 00:33:50.210 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:50.210 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:50.210 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:50.210 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:50.210 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:50.210 09:35:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:50.210 09:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:50.210 09:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:50.210 09:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:50.210 09:35:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.473 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:50.473 fio-3.35 00:33:50.473 Starting 1 thread 00:34:02.679 00:34:02.679 filename0: (groupid=0, jobs=1): err= 0: pid=1367475: Tue Nov 19 09:36:01 2024 00:34:02.679 read: IOPS=200, BW=802KiB/s (821kB/s)(8032KiB/10021msec) 00:34:02.679 slat (nsec): min=5827, max=25904, avg=6126.54, stdev=902.22 00:34:02.679 clat (usec): min=377, max=45075, avg=19943.48, stdev=20462.75 00:34:02.679 lat (usec): min=383, max=45101, avg=19949.61, stdev=20462.73 00:34:02.679 clat percentiles (usec): 00:34:02.680 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 420], 00:34:02.680 | 30.00th=[ 433], 40.00th=[ 478], 50.00th=[ 537], 60.00th=[40633], 00:34:02.680 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:02.680 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:34:02.680 | 99.99th=[44827] 00:34:02.680 bw ( KiB/s): min= 736, max= 1088, per=99.94%, avg=801.60, stdev=77.33, samples=20 00:34:02.680 iops : min= 184, max= 272, avg=200.40, stdev=19.33, samples=20 00:34:02.680 lat (usec) : 500=44.27%, 750=8.12% 00:34:02.680 lat (msec) : 50=47.61% 00:34:02.680 cpu : usr=92.49%, sys=7.23%, ctx=33, majf=0, minf=0 00:34:02.680 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.680 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.680 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:02.680 
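(Annotation: the fio banner above shows how the test drives I/O: the bdev_nvme_attach_controller JSON just printed is fed to fio's SPDK bdev engine on /dev/fd/62, and a generated job file on /dev/fd/61. A hypothetical standalone reconstruction using /tmp paths instead of process substitution; the job parameters are read off the banner (randread, 4 KiB blocks, iodepth 4, one thread, ~10 s) and filename=Nvme0n1 assumes SPDK's usual <controller>n<nsid> bdev naming, with /tmp/bdev.json standing in for the generated config:

cat > /tmp/dif-job.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10
EOF
LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/dif-job.fio)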
00:34:02.680 Run status group 0 (all jobs): 00:34:02.680 READ: bw=802KiB/s (821kB/s), 802KiB/s-802KiB/s (821kB/s-821kB/s), io=8032KiB (8225kB), run=10021-10021msec 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 00:34:02.680 real 0m11.196s 00:34:02.680 user 0m16.026s 00:34:02.680 sys 0m1.031s 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 ************************************ 00:34:02.680 END TEST fio_dif_1_default 00:34:02.680 ************************************ 00:34:02.680 09:36:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:02.680 09:36:02 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:02.680 09:36:02 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 ************************************ 00:34:02.680 START TEST fio_dif_1_multi_subsystems 00:34:02.680 ************************************ 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 bdev_null0 00:34:02.680 09:36:02 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 [2024-11-19 09:36:02.211535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 bdev_null1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.680 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.681 { 00:34:02.681 "params": { 00:34:02.681 "name": "Nvme$subsystem", 00:34:02.681 "trtype": "$TEST_TRANSPORT", 00:34:02.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.681 "adrfam": "ipv4", 00:34:02.681 "trsvcid": "$NVMF_PORT", 00:34:02.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.681 "hdgst": ${hdgst:-false}, 00:34:02.681 "ddgst": ${ddgst:-false} 00:34:02.681 }, 00:34:02.681 "method": "bdev_nvme_attach_controller" 00:34:02.681 } 00:34:02.681 EOF 00:34:02.681 )") 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file <= files )) 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.681 { 00:34:02.681 "params": { 00:34:02.681 "name": "Nvme$subsystem", 00:34:02.681 "trtype": "$TEST_TRANSPORT", 00:34:02.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.681 "adrfam": "ipv4", 00:34:02.681 "trsvcid": "$NVMF_PORT", 00:34:02.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.681 "hdgst": ${hdgst:-false}, 00:34:02.681 "ddgst": ${ddgst:-false} 00:34:02.681 }, 00:34:02.681 "method": "bdev_nvme_attach_controller" 00:34:02.681 } 00:34:02.681 EOF 00:34:02.681 )") 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.681 "params": { 00:34:02.681 "name": "Nvme0", 00:34:02.681 "trtype": "tcp", 00:34:02.681 "traddr": "10.0.0.2", 00:34:02.681 "adrfam": "ipv4", 00:34:02.681 "trsvcid": "4420", 00:34:02.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.681 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.681 "hdgst": false, 00:34:02.681 "ddgst": false 00:34:02.681 }, 00:34:02.681 "method": "bdev_nvme_attach_controller" 00:34:02.681 },{ 00:34:02.681 "params": { 00:34:02.681 "name": "Nvme1", 00:34:02.681 "trtype": "tcp", 00:34:02.681 "traddr": "10.0.0.2", 00:34:02.681 "adrfam": "ipv4", 00:34:02.681 "trsvcid": "4420", 00:34:02.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:02.681 "hdgst": false, 00:34:02.681 "ddgst": false 00:34:02.681 }, 00:34:02.681 "method": "bdev_nvme_attach_controller" 00:34:02.681 }' 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:02.681 09:36:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.681 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:02.681 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:02.681 fio-3.35 00:34:02.681 Starting 2 threads 00:34:12.646 00:34:12.646 filename0: (groupid=0, jobs=1): err= 0: pid=1369573: Tue Nov 19 09:36:13 2024 00:34:12.646 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:34:12.646 slat (nsec): min=5921, max=54590, avg=7808.34, stdev=2873.11 00:34:12.646 clat (usec): min=40785, max=42257, avg=40999.53, stdev=157.96 00:34:12.646 lat (usec): min=40791, max=42311, avg=41007.34, stdev=158.67 00:34:12.646 clat percentiles (usec): 00:34:12.646 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:12.646 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:12.646 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:12.646 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:12.646 | 99.99th=[42206] 00:34:12.646 bw ( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:34:12.646 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:12.646 lat (msec) : 50=100.00% 00:34:12.646 cpu : usr=96.52%, sys=3.21%, ctx=7, majf=0, minf=138 00:34:12.646 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.646 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.646 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:12.646 filename1: (groupid=0, jobs=1): err= 0: pid=1369574: Tue Nov 19 09:36:13 2024 00:34:12.646 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:34:12.646 slat (nsec): min=5918, max=54823, avg=7880.60, stdev=3075.21 00:34:12.646 clat (usec): min=40722, max=42056, avg=41011.50, stdev=182.38 00:34:12.646 lat (usec): min=40728, max=42067, avg=41019.38, stdev=182.63 00:34:12.646 clat percentiles (usec): 00:34:12.646 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:12.646 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:12.646 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:12.646 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:12.646 | 99.99th=[42206] 00:34:12.646 bw ( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:34:12.646 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:12.646 lat (msec) : 50=100.00% 00:34:12.646 cpu : usr=96.53%, sys=3.20%, ctx=10, majf=0, minf=144 00:34:12.646 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:34:12.646 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.646 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:12.646 00:34:12.646 Run status group 0 (all jobs): 00:34:12.646 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10010-10013msec 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.905 00:34:12.905 real 0m11.567s 00:34:12.905 user 0m26.606s 00:34:12.905 sys 0m1.032s 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 ************************************ 00:34:12.905 END TEST fio_dif_1_multi_subsystems 00:34:12.905 ************************************ 00:34:12.905 09:36:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:12.905 09:36:13 nvmf_dif -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:12.905 09:36:13 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 ************************************ 00:34:12.905 START TEST fio_dif_rand_params 00:34:12.905 ************************************ 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 bdev_null0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.905 [2024-11-19 09:36:13.852431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:12.905 { 00:34:12.905 "params": { 00:34:12.905 "name": "Nvme$subsystem", 00:34:12.905 "trtype": "$TEST_TRANSPORT", 00:34:12.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.905 "adrfam": "ipv4", 00:34:12.905 "trsvcid": "$NVMF_PORT", 00:34:12.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.905 "hdgst": ${hdgst:-false}, 00:34:12.905 "ddgst": ${ddgst:-false} 00:34:12.905 }, 00:34:12.905 "method": "bdev_nvme_attach_controller" 00:34:12.905 } 00:34:12.905 EOF 00:34:12.905 )") 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.905 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
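(Annotation: fio_dif_rand_params reuses the same pipeline as the earlier tests but with DIF type 3 on the null bdev and a heavier job geometry, per the NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5 values set at the top of the test. The deltas versus the default run, as a sketch using the same hypothetical job-file approach as above:

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
cat > /tmp/rand-job.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=128k
numjobs=3
iodepth=3
time_based=1
runtime=5
EOF

The "Starting 3 threads" fio run that follows reflects exactly this numjobs=3 / iodepth=3 / 128 KiB shape.)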
00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:12.906 "params": { 00:34:12.906 "name": "Nvme0", 00:34:12.906 "trtype": "tcp", 00:34:12.906 "traddr": "10.0.0.2", 00:34:12.906 "adrfam": "ipv4", 00:34:12.906 "trsvcid": "4420", 00:34:12.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:12.906 "hdgst": false, 00:34:12.906 "ddgst": false 00:34:12.906 }, 00:34:12.906 "method": "bdev_nvme_attach_controller" 00:34:12.906 }' 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:12.906 09:36:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.479 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:13.479 ... 
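Before the run output that follows: the harness launches stock fio with the SPDK bdev engine preloaded and both generated configs handed over as /dev/fd descriptors. A hand-rolled equivalent, as a sketch only: it assumes the JSON above was saved to bdev.json, that the attached controller's first namespace surfaces as bdev Nvme0n1 per SPDK's usual naming, and that the generated job file amounts to the parameters set at the top of this test (randread, bs=128k, numjobs=3, iodepth=3, runtime=5).

    # Sketch of the fio_bdev invocation traced above; paths are this workspace's.
    # thread=1 is required when fio drives the SPDK bdev engine.
    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread=1 \
      --name=filename0 --filename=Nvme0n1 \
      --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
      --time_based --runtime=5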
00:34:13.479 fio-3.35 00:34:13.479 Starting 3 threads 00:34:18.832 00:34:18.832 filename0: (groupid=0, jobs=1): err= 0: pid=1371924: Tue Nov 19 09:36:19 2024 00:34:18.832 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(191MiB/5046msec) 00:34:18.832 slat (nsec): min=5979, max=54802, avg=17716.29, stdev=6728.77 00:34:18.832 clat (usec): min=3702, max=51777, avg=9878.41, stdev=8271.84 00:34:18.832 lat (usec): min=3714, max=51784, avg=9896.13, stdev=8271.25 00:34:18.832 clat percentiles (usec): 00:34:18.832 | 1.00th=[ 4228], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7373], 00:34:18.832 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8586], 00:34:18.832 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9765], 95.00th=[10683], 00:34:18.832 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[51643], 00:34:18.832 | 99.99th=[51643] 00:34:18.832 bw ( KiB/s): min=23808, max=47616, per=32.57%, avg=38963.20, stdev=7615.06, samples=10 00:34:18.832 iops : min= 186, max= 372, avg=304.40, stdev=59.49, samples=10 00:34:18.832 lat (msec) : 4=0.66%, 10=91.67%, 20=3.41%, 50=3.93%, 100=0.33% 00:34:18.832 cpu : usr=95.02%, sys=4.28%, ctx=86, majf=0, minf=95 00:34:18.832 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.832 issued rwts: total=1525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.832 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:18.832 filename0: (groupid=0, jobs=1): err= 0: pid=1371925: Tue Nov 19 09:36:19 2024 00:34:18.832 read: IOPS=317, BW=39.7MiB/s (41.7MB/s)(199MiB/5003msec) 00:34:18.832 slat (nsec): min=6240, max=39977, avg=15526.71, stdev=7517.29 00:34:18.832 clat (usec): min=3120, max=50610, avg=9421.07, stdev=4741.11 00:34:18.832 lat (usec): min=3134, max=50620, avg=9436.60, stdev=4741.78 00:34:18.832 clat percentiles (usec): 00:34:18.832 | 1.00th=[ 3851], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 7046], 00:34:18.832 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9765], 00:34:18.832 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[11731], 00:34:18.832 | 99.00th=[44827], 99.50th=[47973], 99.90th=[49546], 99.95th=[50594], 00:34:18.832 | 99.99th=[50594] 00:34:18.832 bw ( KiB/s): min=38656, max=44544, per=33.98%, avg=40652.80, stdev=2158.11, samples=10 00:34:18.832 iops : min= 302, max= 348, avg=317.60, stdev=16.86, samples=10 00:34:18.832 lat (msec) : 4=1.38%, 10=64.03%, 20=33.27%, 50=1.26%, 100=0.06% 00:34:18.832 cpu : usr=96.74%, sys=2.94%, ctx=8, majf=0, minf=94 00:34:18.832 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.832 issued rwts: total=1590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.832 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:18.832 filename0: (groupid=0, jobs=1): err= 0: pid=1371926: Tue Nov 19 09:36:19 2024 00:34:18.832 read: IOPS=317, BW=39.7MiB/s (41.6MB/s)(200MiB/5044msec) 00:34:18.832 slat (nsec): min=6117, max=59607, avg=15809.38, stdev=7820.32 00:34:18.833 clat (usec): min=3254, max=51763, avg=9406.87, stdev=5997.83 00:34:18.833 lat (usec): min=3262, max=51791, avg=9422.68, stdev=5997.95 00:34:18.833 clat percentiles (usec): 00:34:18.833 | 1.00th=[ 3884], 5.00th=[ 5604], 10.00th=[ 
5997], 20.00th=[ 6915], 00:34:18.833 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:34:18.833 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[11207], 00:34:18.833 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[51643], 00:34:18.833 | 99.99th=[51643] 00:34:18.833 bw ( KiB/s): min=27136, max=46336, per=34.22%, avg=40934.40, stdev=5354.22, samples=10 00:34:18.833 iops : min= 212, max= 362, avg=319.80, stdev=41.83, samples=10 00:34:18.833 lat (msec) : 4=1.12%, 10=77.70%, 20=18.99%, 50=1.94%, 100=0.25% 00:34:18.833 cpu : usr=97.16%, sys=2.52%, ctx=8, majf=0, minf=153 00:34:18.833 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.833 issued rwts: total=1601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.833 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:18.833 00:34:18.833 Run status group 0 (all jobs): 00:34:18.833 READ: bw=117MiB/s (123MB/s), 37.8MiB/s-39.7MiB/s (39.6MB/s-41.7MB/s), io=590MiB (618MB), run=5003-5046msec 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.091 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 bdev_null0 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 [2024-11-19 09:36:19.974319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 bdev_null1 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 09:36:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 bdev_null2 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:19.092 { 00:34:19.092 "params": { 00:34:19.092 "name": "Nvme$subsystem", 00:34:19.092 "trtype": "$TEST_TRANSPORT", 00:34:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.092 "adrfam": "ipv4", 00:34:19.092 "trsvcid": "$NVMF_PORT", 00:34:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.092 "hdgst": ${hdgst:-false}, 00:34:19.092 "ddgst": ${ddgst:-false} 00:34:19.092 }, 00:34:19.092 "method": "bdev_nvme_attach_controller" 00:34:19.092 } 00:34:19.092 EOF 00:34:19.092 )") 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:19.092 { 00:34:19.092 "params": { 00:34:19.092 "name": "Nvme$subsystem", 00:34:19.092 "trtype": "$TEST_TRANSPORT", 00:34:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.092 "adrfam": "ipv4", 00:34:19.092 "trsvcid": "$NVMF_PORT", 00:34:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.092 "hdgst": ${hdgst:-false}, 00:34:19.092 "ddgst": ${ddgst:-false} 00:34:19.092 }, 00:34:19.092 "method": "bdev_nvme_attach_controller" 00:34:19.092 } 00:34:19.092 EOF 00:34:19.092 )") 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:19.092 { 00:34:19.092 "params": { 00:34:19.092 "name": "Nvme$subsystem", 00:34:19.092 "trtype": "$TEST_TRANSPORT", 00:34:19.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.092 "adrfam": "ipv4", 00:34:19.092 "trsvcid": "$NVMF_PORT", 00:34:19.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.092 "hdgst": ${hdgst:-false}, 00:34:19.092 "ddgst": ${ddgst:-false} 00:34:19.092 }, 00:34:19.092 "method": "bdev_nvme_attach_controller" 00:34:19.092 } 00:34:19.092 EOF 00:34:19.092 )") 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:19.092 09:36:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:19.092 "params": { 00:34:19.092 "name": "Nvme0", 00:34:19.093 "trtype": "tcp", 00:34:19.093 "traddr": "10.0.0.2", 00:34:19.093 "adrfam": "ipv4", 00:34:19.093 "trsvcid": "4420", 00:34:19.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:19.093 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:19.093 "hdgst": false, 00:34:19.093 "ddgst": false 00:34:19.093 }, 00:34:19.093 "method": "bdev_nvme_attach_controller" 00:34:19.093 },{ 00:34:19.093 "params": { 00:34:19.093 "name": "Nvme1", 00:34:19.093 "trtype": "tcp", 00:34:19.093 "traddr": "10.0.0.2", 00:34:19.093 "adrfam": "ipv4", 00:34:19.093 "trsvcid": "4420", 00:34:19.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:19.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:19.093 "hdgst": false, 00:34:19.093 "ddgst": false 00:34:19.093 }, 00:34:19.093 "method": "bdev_nvme_attach_controller" 00:34:19.093 },{ 00:34:19.093 "params": { 00:34:19.093 "name": "Nvme2", 00:34:19.093 "trtype": "tcp", 00:34:19.093 "traddr": "10.0.0.2", 00:34:19.093 "adrfam": "ipv4", 00:34:19.093 "trsvcid": "4420", 00:34:19.093 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:19.093 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:19.093 "hdgst": false, 00:34:19.093 "ddgst": false 00:34:19.093 }, 00:34:19.093 "method": "bdev_nvme_attach_controller" 00:34:19.093 }' 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # asan_lib= 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:19.093 09:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:19.664 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:19.664 ... 00:34:19.664 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:19.664 ... 00:34:19.664 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:19.664 ... 00:34:19.664 fio-3.35 00:34:19.664 Starting 24 threads 00:34:31.860 00:34:31.860 filename0: (groupid=0, jobs=1): err= 0: pid=1373095: Tue Nov 19 09:36:31 2024 00:34:31.860 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.3MiB/10013msec) 00:34:31.860 slat (nsec): min=7063, max=70067, avg=16796.25, stdev=7905.88 00:34:31.860 clat (usec): min=14780, max=41258, avg=27962.34, stdev=1407.84 00:34:31.860 lat (usec): min=14798, max=41317, avg=27979.14, stdev=1404.85 00:34:31.860 clat percentiles (usec): 00:34:31.860 | 1.00th=[20841], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.860 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.860 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.860 | 99.00th=[29754], 99.50th=[30540], 99.90th=[41157], 99.95th=[41157], 00:34:31.860 | 99.99th=[41157] 00:34:31.860 bw ( KiB/s): min= 2176, max= 2480, per=4.17%, avg=2274.40, stdev=76.69, samples=20 00:34:31.860 iops : min= 544, max= 620, avg=568.60, stdev=19.17, samples=20 00:34:31.860 lat (msec) : 20=0.82%, 50=99.18% 00:34:31.860 cpu : usr=98.52%, sys=1.14%, ctx=13, majf=0, minf=33 00:34:31.860 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:31.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.860 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.860 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.860 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.860 filename0: (groupid=0, jobs=1): err= 0: pid=1373096: Tue Nov 19 09:36:31 2024 00:34:31.860 read: IOPS=567, BW=2270KiB/s (2325kB/s)(22.2MiB/10007msec) 00:34:31.860 slat (nsec): min=8963, max=57331, avg=17018.49, stdev=4872.08 00:34:31.860 clat (usec): min=8057, max=49898, avg=28025.24, stdev=1834.20 00:34:31.860 lat (usec): min=8075, max=49922, avg=28042.25, stdev=1834.35 00:34:31.860 clat percentiles (usec): 00:34:31.860 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.860 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.860 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:34:31.860 | 99.00th=[29492], 99.50th=[32900], 99.90th=[49546], 99.95th=[50070], 00:34:31.860 | 99.99th=[50070] 00:34:31.860 bw ( KiB/s): min= 2052, max= 2432, per=4.15%, avg=2265.80, stdev=83.55, samples=20 00:34:31.860 iops : min= 513, max= 608, avg=566.45, stdev=20.89, samples=20 00:34:31.860 lat (msec) : 10=0.28%, 20=0.28%, 50=99.44% 00:34:31.860 cpu : usr=98.45%, sys=1.17%, ctx=13, majf=0, minf=38 
00:34:31.860 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.860 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.860 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.860 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.860 filename0: (groupid=0, jobs=1): err= 0: pid=1373097: Tue Nov 19 09:36:31 2024 00:34:31.860 read: IOPS=572, BW=2288KiB/s (2343kB/s)(22.5MiB/10048msec) 00:34:31.860 slat (nsec): min=4211, max=62725, avg=16928.88, stdev=8141.85 00:34:31.860 clat (usec): min=8922, max=58186, avg=27814.80, stdev=3273.38 00:34:31.860 lat (usec): min=8936, max=58198, avg=27831.73, stdev=3273.51 00:34:31.860 clat percentiles (usec): 00:34:31.860 | 1.00th=[16712], 5.00th=[21103], 10.00th=[25560], 20.00th=[27657], 00:34:31.860 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.860 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[31589], 00:34:31.860 | 99.00th=[39060], 99.50th=[41681], 99.90th=[51119], 99.95th=[51119], 00:34:31.860 | 99.99th=[57934] 00:34:31.860 bw ( KiB/s): min= 2144, max= 2448, per=4.21%, avg=2294.40, stdev=79.31, samples=20 00:34:31.860 iops : min= 536, max= 612, avg=573.60, stdev=19.83, samples=20 00:34:31.860 lat (msec) : 10=0.21%, 20=3.41%, 50=96.24%, 100=0.14% 00:34:31.860 cpu : usr=98.49%, sys=1.18%, ctx=13, majf=0, minf=53 00:34:31.860 IO depths : 1=3.3%, 2=6.8%, 4=14.8%, 8=64.2%, 16=10.9%, 32=0.0%, >=64=0.0% 00:34:31.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.860 complete : 0=0.0%, 4=91.7%, 8=4.3%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.860 issued rwts: total=5748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.860 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.860 filename0: (groupid=0, jobs=1): err= 0: pid=1373098: Tue Nov 19 09:36:31 2024 00:34:31.860 read: IOPS=567, BW=2271KiB/s (2325kB/s)(22.2MiB/10005msec) 00:34:31.860 slat (nsec): min=6844, max=58827, avg=12248.12, stdev=4569.84 00:34:31.860 clat (usec): min=13896, max=41758, avg=28075.15, stdev=1321.24 00:34:31.860 lat (usec): min=13911, max=41768, avg=28087.40, stdev=1321.04 00:34:31.860 clat percentiles (usec): 00:34:31.860 | 1.00th=[24773], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.860 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:34:31.860 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28705], 95.00th=[28967], 00:34:31.860 | 99.00th=[29492], 99.50th=[32900], 99.90th=[41157], 99.95th=[41681], 00:34:31.860 | 99.99th=[41681] 00:34:31.860 bw ( KiB/s): min= 2176, max= 2320, per=4.16%, avg=2270.32, stdev=56.42, samples=19 00:34:31.860 iops : min= 544, max= 580, avg=567.58, stdev=14.10, samples=19 00:34:31.860 lat (msec) : 20=0.60%, 50=99.40% 00:34:31.860 cpu : usr=98.41%, sys=1.25%, ctx=14, majf=0, minf=61 00:34:31.860 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:31.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.860 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.860 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.860 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.860 filename0: (groupid=0, jobs=1): err= 0: pid=1373100: Tue Nov 19 09:36:31 2024 00:34:31.860 read: IOPS=585, BW=2342KiB/s (2398kB/s)(22.9MiB/10007msec) 00:34:31.860 
slat (nsec): min=6724, max=62652, avg=20963.23, stdev=11725.26 00:34:31.860 clat (usec): min=8773, max=69263, avg=27162.52, stdev=3727.53 00:34:31.860 lat (usec): min=8780, max=69280, avg=27183.48, stdev=3729.54 00:34:31.860 clat percentiles (usec): 00:34:31.860 | 1.00th=[16909], 5.00th=[20317], 10.00th=[22152], 20.00th=[26870], 00:34:31.861 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:34:31.861 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28705], 95.00th=[32113], 00:34:31.861 | 99.00th=[35390], 99.50th=[41681], 99.90th=[49546], 99.95th=[50070], 00:34:31.861 | 99.99th=[69731] 00:34:31.861 bw ( KiB/s): min= 2176, max= 2640, per=4.29%, avg=2337.00, stdev=117.55, samples=20 00:34:31.861 iops : min= 544, max= 660, avg=584.25, stdev=29.39, samples=20 00:34:31.861 lat (msec) : 10=0.27%, 20=4.64%, 50=95.05%, 100=0.03% 00:34:31.861 cpu : usr=98.38%, sys=1.29%, ctx=10, majf=0, minf=47 00:34:31.861 IO depths : 1=3.2%, 2=6.6%, 4=15.0%, 8=64.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:34:31.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 complete : 0=0.0%, 4=91.6%, 8=4.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 issued rwts: total=5858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.861 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.861 filename0: (groupid=0, jobs=1): err= 0: pid=1373101: Tue Nov 19 09:36:31 2024 00:34:31.861 read: IOPS=565, BW=2264KiB/s (2318kB/s)(22.1MiB/10009msec) 00:34:31.861 slat (nsec): min=6278, max=43297, avg=19272.99, stdev=5624.69 00:34:31.861 clat (usec): min=16376, max=47624, avg=28096.20, stdev=1247.67 00:34:31.861 lat (usec): min=16390, max=47641, avg=28115.47, stdev=1247.43 00:34:31.861 clat percentiles (usec): 00:34:31.861 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.861 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.861 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.861 | 99.00th=[29754], 99.50th=[33817], 99.90th=[47449], 99.95th=[47449], 00:34:31.861 | 99.99th=[47449] 00:34:31.861 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2256.84, stdev=76.45, samples=19 00:34:31.861 iops : min= 512, max= 576, avg=564.21, stdev=19.11, samples=19 00:34:31.861 lat (msec) : 20=0.12%, 50=99.88% 00:34:31.861 cpu : usr=98.24%, sys=1.43%, ctx=14, majf=0, minf=30 00:34:31.861 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.861 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.861 filename0: (groupid=0, jobs=1): err= 0: pid=1373102: Tue Nov 19 09:36:31 2024 00:34:31.861 read: IOPS=567, BW=2270KiB/s (2325kB/s)(22.2MiB/10008msec) 00:34:31.861 slat (nsec): min=3997, max=65724, avg=27646.49, stdev=8880.40 00:34:31.861 clat (usec): min=8827, max=45583, avg=27931.98, stdev=1651.36 00:34:31.861 lat (usec): min=8842, max=45595, avg=27959.63, stdev=1651.51 00:34:31.861 clat percentiles (usec): 00:34:31.861 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.861 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.861 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.861 | 99.00th=[29492], 99.50th=[32637], 99.90th=[45351], 99.95th=[45351], 
00:34:31.861 | 99.99th=[45351] 00:34:31.861 bw ( KiB/s): min= 2171, max= 2432, per=4.15%, avg=2265.35, stdev=73.45, samples=20 00:34:31.861 iops : min= 542, max= 608, avg=566.30, stdev=18.41, samples=20 00:34:31.861 lat (msec) : 10=0.28%, 20=0.32%, 50=99.40% 00:34:31.861 cpu : usr=98.62%, sys=1.04%, ctx=15, majf=0, minf=28 00:34:31.861 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:31.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.861 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.861 filename0: (groupid=0, jobs=1): err= 0: pid=1373103: Tue Nov 19 09:36:31 2024 00:34:31.861 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10019msec) 00:34:31.861 slat (nsec): min=7060, max=62889, avg=23416.12, stdev=9656.09 00:34:31.861 clat (usec): min=14688, max=34292, avg=27961.97, stdev=1147.72 00:34:31.861 lat (usec): min=14728, max=34321, avg=27985.38, stdev=1146.85 00:34:31.861 clat percentiles (usec): 00:34:31.861 | 1.00th=[23462], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.861 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.861 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.861 | 99.00th=[29230], 99.50th=[32637], 99.90th=[32900], 99.95th=[33817], 00:34:31.861 | 99.99th=[34341] 00:34:31.861 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2272.00, stdev=56.87, samples=20 00:34:31.861 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:34:31.861 lat (msec) : 20=0.53%, 50=99.47% 00:34:31.861 cpu : usr=98.43%, sys=1.23%, ctx=15, majf=0, minf=50 00:34:31.861 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.861 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.861 filename1: (groupid=0, jobs=1): err= 0: pid=1373104: Tue Nov 19 09:36:31 2024 00:34:31.861 read: IOPS=573, BW=2292KiB/s (2347kB/s)(22.4MiB/10024msec) 00:34:31.861 slat (nsec): min=6974, max=57643, avg=14593.99, stdev=7869.97 00:34:31.861 clat (usec): min=3368, max=33940, avg=27804.75, stdev=2445.39 00:34:31.861 lat (usec): min=3380, max=33955, avg=27819.35, stdev=2444.38 00:34:31.861 clat percentiles (usec): 00:34:31.861 | 1.00th=[15270], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.861 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:34:31.861 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:34:31.861 | 99.00th=[29492], 99.50th=[30016], 99.90th=[33817], 99.95th=[33817], 00:34:31.861 | 99.99th=[33817] 00:34:31.861 bw ( KiB/s): min= 2176, max= 2693, per=4.20%, avg=2291.45, stdev=110.05, samples=20 00:34:31.861 iops : min= 544, max= 673, avg=572.85, stdev=27.46, samples=20 00:34:31.861 lat (msec) : 4=0.28%, 10=0.56%, 20=0.84%, 50=98.33% 00:34:31.861 cpu : usr=98.33%, sys=1.34%, ctx=15, majf=0, minf=44 00:34:31.861 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:31.861 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.861 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.861 filename1: (groupid=0, jobs=1): err= 0: pid=1373105: Tue Nov 19 09:36:31 2024 00:34:31.861 read: IOPS=565, BW=2263KiB/s (2318kB/s)(22.1MiB/10010msec) 00:34:31.861 slat (nsec): min=7298, max=41151, avg=18581.88, stdev=5466.67 00:34:31.861 clat (usec): min=16451, max=47645, avg=28106.16, stdev=1417.10 00:34:31.861 lat (usec): min=16460, max=47662, avg=28124.75, stdev=1416.85 00:34:31.861 clat percentiles (usec): 00:34:31.861 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.861 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.861 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.861 | 99.00th=[33162], 99.50th=[34341], 99.90th=[47449], 99.95th=[47449], 00:34:31.861 | 99.99th=[47449] 00:34:31.861 bw ( KiB/s): min= 2048, max= 2392, per=4.15%, avg=2263.60, stdev=80.31, samples=20 00:34:31.861 iops : min= 512, max= 598, avg=565.90, stdev=20.08, samples=20 00:34:31.861 lat (msec) : 20=0.28%, 50=99.72% 00:34:31.861 cpu : usr=98.47%, sys=1.21%, ctx=13, majf=0, minf=29 00:34:31.861 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:31.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.861 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.861 filename1: (groupid=0, jobs=1): err= 0: pid=1373106: Tue Nov 19 09:36:31 2024 00:34:31.861 read: IOPS=567, BW=2269KiB/s (2324kB/s)(22.2MiB/10013msec) 00:34:31.861 slat (nsec): min=9489, max=57296, avg=28084.58, stdev=8340.66 00:34:31.861 clat (usec): min=15867, max=32884, avg=27954.82, stdev=798.86 00:34:31.861 lat (usec): min=15881, max=32910, avg=27982.90, stdev=799.01 00:34:31.861 clat percentiles (usec): 00:34:31.861 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.861 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.861 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.861 | 99.00th=[29230], 99.50th=[32375], 99.90th=[32900], 99.95th=[32900], 00:34:31.861 | 99.99th=[32900] 00:34:31.861 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2264.20, stdev=59.56, samples=20 00:34:31.861 iops : min= 544, max= 576, avg=566.05, stdev=14.89, samples=20 00:34:31.861 lat (msec) : 20=0.28%, 50=99.72% 00:34:31.861 cpu : usr=98.41%, sys=1.26%, ctx=19, majf=0, minf=30 00:34:31.861 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.861 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.861 filename1: (groupid=0, jobs=1): err= 0: pid=1373107: Tue Nov 19 09:36:31 2024 00:34:31.861 read: IOPS=566, BW=2265KiB/s (2320kB/s)(22.1MiB/10001msec) 00:34:31.861 slat (nsec): min=4056, max=58997, avg=28485.85, stdev=9023.74 00:34:31.861 clat (usec): min=14915, max=47426, avg=27983.98, stdev=1322.73 00:34:31.861 lat (usec): min=14939, max=47438, avg=28012.46, stdev=1322.33 00:34:31.861 clat percentiles (usec): 00:34:31.861 | 1.00th=[27395], 
5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.861 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.861 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.861 | 99.00th=[29230], 99.50th=[32637], 99.90th=[47449], 99.95th=[47449], 00:34:31.861 | 99.99th=[47449] 00:34:31.861 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2256.84, stdev=76.45, samples=19 00:34:31.861 iops : min= 512, max= 576, avg=564.21, stdev=19.11, samples=19 00:34:31.861 lat (msec) : 20=0.28%, 50=99.72% 00:34:31.861 cpu : usr=98.49%, sys=1.17%, ctx=13, majf=0, minf=29 00:34:31.861 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.861 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.862 filename1: (groupid=0, jobs=1): err= 0: pid=1373108: Tue Nov 19 09:36:31 2024 00:34:31.862 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10019msec) 00:34:31.862 slat (nsec): min=7413, max=59199, avg=28305.15, stdev=9202.56 00:34:31.862 clat (usec): min=14809, max=37349, avg=27904.49, stdev=1317.62 00:34:31.862 lat (usec): min=14835, max=37372, avg=27932.80, stdev=1317.99 00:34:31.862 clat percentiles (usec): 00:34:31.862 | 1.00th=[20579], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:34:31.862 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.862 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.862 | 99.00th=[29492], 99.50th=[32900], 99.90th=[36439], 99.95th=[36439], 00:34:31.862 | 99.99th=[37487] 00:34:31.862 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2272.00, stdev=56.87, samples=20 00:34:31.862 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:34:31.862 lat (msec) : 20=0.63%, 50=99.37% 00:34:31.862 cpu : usr=98.54%, sys=1.12%, ctx=13, majf=0, minf=32 00:34:31.862 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:31.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.862 filename1: (groupid=0, jobs=1): err= 0: pid=1373110: Tue Nov 19 09:36:31 2024 00:34:31.862 read: IOPS=567, BW=2270KiB/s (2325kB/s)(22.2MiB/10008msec) 00:34:31.862 slat (nsec): min=4149, max=58808, avg=26843.66, stdev=8853.56 00:34:31.862 clat (usec): min=8747, max=44910, avg=27932.99, stdev=1584.88 00:34:31.862 lat (usec): min=8764, max=44922, avg=27959.83, stdev=1585.02 00:34:31.862 clat percentiles (usec): 00:34:31.862 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.862 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.862 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.862 | 99.00th=[29230], 99.50th=[32637], 99.90th=[44827], 99.95th=[44827], 00:34:31.862 | 99.99th=[44827] 00:34:31.862 bw ( KiB/s): min= 2176, max= 2432, per=4.15%, avg=2265.60, stdev=73.12, samples=20 00:34:31.862 iops : min= 544, max= 608, avg=566.40, stdev=18.28, samples=20 00:34:31.862 lat (msec) : 10=0.28%, 20=0.28%, 50=99.44% 00:34:31.862 cpu : usr=98.41%, sys=1.25%, 
ctx=14, majf=0, minf=37 00:34:31.862 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.862 filename1: (groupid=0, jobs=1): err= 0: pid=1373111: Tue Nov 19 09:36:31 2024 00:34:31.862 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10018msec) 00:34:31.862 slat (nsec): min=7486, max=62411, avg=18685.96, stdev=8277.99 00:34:31.862 clat (usec): min=14783, max=32872, avg=27994.98, stdev=1118.17 00:34:31.862 lat (usec): min=14806, max=32888, avg=28013.66, stdev=1117.22 00:34:31.862 clat percentiles (usec): 00:34:31.862 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.862 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.862 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.862 | 99.00th=[28967], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:34:31.862 | 99.99th=[32900] 00:34:31.862 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2272.00, stdev=56.87, samples=20 00:34:31.862 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:34:31.862 lat (msec) : 20=0.56%, 50=99.44% 00:34:31.862 cpu : usr=98.55%, sys=1.12%, ctx=23, majf=0, minf=36 00:34:31.862 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.862 filename1: (groupid=0, jobs=1): err= 0: pid=1373112: Tue Nov 19 09:36:31 2024 00:34:31.862 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10002msec) 00:34:31.862 slat (nsec): min=6807, max=58569, avg=21540.06, stdev=10059.32 00:34:31.862 clat (usec): min=9075, max=45774, avg=27954.07, stdev=2925.88 00:34:31.862 lat (usec): min=9107, max=45789, avg=27975.61, stdev=2926.24 00:34:31.862 clat percentiles (usec): 00:34:31.862 | 1.00th=[13566], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:34:31.862 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.862 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28705], 95.00th=[28967], 00:34:31.862 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:34:31.862 | 99.99th=[45876] 00:34:31.862 bw ( KiB/s): min= 2176, max= 2384, per=4.17%, avg=2272.84, stdev=61.15, samples=19 00:34:31.862 iops : min= 544, max= 596, avg=568.21, stdev=15.29, samples=19 00:34:31.862 lat (msec) : 10=0.11%, 20=2.25%, 50=97.64% 00:34:31.862 cpu : usr=98.56%, sys=1.10%, ctx=18, majf=0, minf=27 00:34:31.862 IO depths : 1=5.2%, 2=11.1%, 4=23.8%, 8=52.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:34:31.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.862 filename2: (groupid=0, jobs=1): err= 0: pid=1373113: Tue Nov 19 09:36:31 2024 00:34:31.862 read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10008msec) 
00:34:31.862 slat (nsec): min=4151, max=59018, avg=22923.60, stdev=9408.66 00:34:31.862 clat (usec): min=8773, max=57770, avg=27873.01, stdev=2301.55 00:34:31.862 lat (usec): min=8787, max=57782, avg=27895.93, stdev=2301.87 00:34:31.862 clat percentiles (usec): 00:34:31.862 | 1.00th=[17171], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:34:31.862 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.862 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.862 | 99.00th=[35914], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:34:31.862 | 99.99th=[57934] 00:34:31.862 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2274.40, stdev=71.98, samples=20 00:34:31.862 iops : min= 544, max= 608, avg=568.60, stdev=18.00, samples=20 00:34:31.862 lat (msec) : 10=0.28%, 20=1.16%, 50=98.53%, 100=0.04% 00:34:31.862 cpu : usr=98.62%, sys=1.05%, ctx=11, majf=0, minf=53 00:34:31.862 IO depths : 1=5.5%, 2=11.3%, 4=23.3%, 8=52.7%, 16=7.2%, 32=0.0%, >=64=0.0% 00:34:31.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.862 filename2: (groupid=0, jobs=1): err= 0: pid=1373114: Tue Nov 19 09:36:31 2024 00:34:31.862 read: IOPS=584, BW=2337KiB/s (2393kB/s)(22.9MiB/10012msec) 00:34:31.862 slat (nsec): min=6522, max=53718, avg=10243.32, stdev=4211.48 00:34:31.862 clat (usec): min=3440, max=33017, avg=27293.79, stdev=3562.45 00:34:31.862 lat (usec): min=3458, max=33026, avg=27304.03, stdev=3561.82 00:34:31.862 clat percentiles (usec): 00:34:31.862 | 1.00th=[ 6652], 5.00th=[22938], 10.00th=[27657], 20.00th=[27919], 00:34:31.862 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:34:31.862 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.862 | 99.00th=[30540], 99.50th=[31065], 99.90th=[32637], 99.95th=[32637], 00:34:31.862 | 99.99th=[32900] 00:34:31.862 bw ( KiB/s): min= 2176, max= 2816, per=4.28%, avg=2333.60, stdev=157.82, samples=20 00:34:31.862 iops : min= 544, max= 704, avg=583.40, stdev=39.45, samples=20 00:34:31.862 lat (msec) : 4=0.41%, 10=1.30%, 20=2.94%, 50=95.35% 00:34:31.862 cpu : usr=98.25%, sys=1.39%, ctx=35, majf=0, minf=56 00:34:31.862 IO depths : 1=5.7%, 2=11.6%, 4=23.8%, 8=52.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:31.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 issued rwts: total=5850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.862 filename2: (groupid=0, jobs=1): err= 0: pid=1373115: Tue Nov 19 09:36:31 2024 00:34:31.862 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10019msec) 00:34:31.862 slat (nsec): min=11025, max=58997, avg=27227.25, stdev=9289.30 00:34:31.862 clat (usec): min=14720, max=32857, avg=27919.61, stdev=1116.03 00:34:31.862 lat (usec): min=14738, max=32879, avg=27946.84, stdev=1115.78 00:34:31.862 clat percentiles (usec): 00:34:31.862 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.862 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.862 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.862 | 99.00th=[28967], 99.50th=[32375], 
99.90th=[32637], 99.95th=[32900], 00:34:31.862 | 99.99th=[32900] 00:34:31.862 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2272.00, stdev=56.87, samples=20 00:34:31.862 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:34:31.862 lat (msec) : 20=0.56%, 50=99.44% 00:34:31.862 cpu : usr=98.50%, sys=1.16%, ctx=10, majf=0, minf=30 00:34:31.862 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.862 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.862 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.862 filename2: (groupid=0, jobs=1): err= 0: pid=1373116: Tue Nov 19 09:36:31 2024 00:34:31.862 read: IOPS=574, BW=2298KiB/s (2353kB/s)(22.5MiB/10025msec) 00:34:31.862 slat (nsec): min=7180, max=60303, avg=21174.21, stdev=8673.58 00:34:31.862 clat (usec): min=3415, max=32821, avg=27679.95, stdev=2812.50 00:34:31.862 lat (usec): min=3428, max=32841, avg=27701.12, stdev=2812.29 00:34:31.862 clat percentiles (usec): 00:34:31.862 | 1.00th=[ 4752], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.862 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.863 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.863 | 99.00th=[29230], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:34:31.863 | 99.99th=[32900] 00:34:31.863 bw ( KiB/s): min= 2176, max= 2816, per=4.21%, avg=2297.60, stdev=134.41, samples=20 00:34:31.863 iops : min= 544, max= 704, avg=574.40, stdev=33.60, samples=20 00:34:31.863 lat (msec) : 4=0.78%, 10=0.33%, 20=0.83%, 50=98.06% 00:34:31.863 cpu : usr=98.31%, sys=1.32%, ctx=13, majf=0, minf=30 00:34:31.863 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.863 filename2: (groupid=0, jobs=1): err= 0: pid=1373118: Tue Nov 19 09:36:31 2024 00:34:31.863 read: IOPS=572, BW=2292KiB/s (2347kB/s)(22.4MiB/10008msec) 00:34:31.863 slat (nsec): min=6760, max=67585, avg=22548.38, stdev=11753.82 00:34:31.863 clat (usec): min=14913, max=45379, avg=27727.18, stdev=2344.35 00:34:31.863 lat (usec): min=14928, max=45406, avg=27749.73, stdev=2345.96 00:34:31.863 clat percentiles (usec): 00:34:31.863 | 1.00th=[17433], 5.00th=[26870], 10.00th=[27657], 20.00th=[27657], 00:34:31.863 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.863 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.863 | 99.00th=[32637], 99.50th=[40633], 99.90th=[45351], 99.95th=[45351], 00:34:31.863 | 99.99th=[45351] 00:34:31.863 bw ( KiB/s): min= 2176, max= 2608, per=4.19%, avg=2287.20, stdev=94.23, samples=20 00:34:31.863 iops : min= 544, max= 652, avg=571.80, stdev=23.56, samples=20 00:34:31.863 lat (msec) : 20=3.17%, 50=96.83% 00:34:31.863 cpu : usr=98.20%, sys=1.46%, ctx=12, majf=0, minf=43 00:34:31.863 IO depths : 1=5.2%, 2=11.2%, 4=24.0%, 8=52.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:34:31.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.863 filename2: (groupid=0, jobs=1): err= 0: pid=1373119: Tue Nov 19 09:36:31 2024 00:34:31.863 read: IOPS=566, BW=2264KiB/s (2319kB/s)(22.1MiB/10005msec) 00:34:31.863 slat (nsec): min=4167, max=60654, avg=19621.11, stdev=6355.61 00:34:31.863 clat (usec): min=14989, max=50462, avg=28085.58, stdev=1308.22 00:34:31.863 lat (usec): min=15018, max=50474, avg=28105.20, stdev=1307.50 00:34:31.863 clat percentiles (usec): 00:34:31.863 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.863 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.863 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.863 | 99.00th=[32900], 99.50th=[33817], 99.90th=[42730], 99.95th=[50594], 00:34:31.863 | 99.99th=[50594] 00:34:31.863 bw ( KiB/s): min= 2176, max= 2416, per=4.15%, avg=2264.80, stdev=71.27, samples=20 00:34:31.863 iops : min= 544, max= 604, avg=566.20, stdev=17.82, samples=20 00:34:31.863 lat (msec) : 20=0.28%, 50=99.65%, 100=0.07% 00:34:31.863 cpu : usr=98.33%, sys=1.34%, ctx=13, majf=0, minf=33 00:34:31.863 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:31.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.863 filename2: (groupid=0, jobs=1): err= 0: pid=1373120: Tue Nov 19 09:36:31 2024 00:34:31.863 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10019msec) 00:34:31.863 slat (nsec): min=7139, max=70386, avg=28578.32, stdev=9650.95 00:34:31.863 clat (usec): min=14753, max=35046, avg=27895.86, stdev=1155.60 00:34:31.863 lat (usec): min=14772, max=35081, avg=27924.43, stdev=1155.57 00:34:31.863 clat percentiles (usec): 00:34:31.863 | 1.00th=[26346], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:34:31.863 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.863 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:34:31.863 | 99.00th=[29230], 99.50th=[32375], 99.90th=[33817], 99.95th=[34866], 00:34:31.863 | 99.99th=[34866] 00:34:31.863 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2272.00, stdev=56.87, samples=20 00:34:31.863 iops : min= 544, max= 576, avg=568.00, stdev=14.22, samples=20 00:34:31.863 lat (msec) : 20=0.56%, 50=99.44% 00:34:31.863 cpu : usr=98.39%, sys=1.26%, ctx=15, majf=0, minf=27 00:34:31.863 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:31.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.863 filename2: (groupid=0, jobs=1): err= 0: pid=1373122: Tue Nov 19 09:36:31 2024 00:34:31.863 read: IOPS=567, BW=2270KiB/s (2325kB/s)(22.2MiB/10007msec) 00:34:31.863 slat (nsec): min=13359, max=61981, avg=17290.17, stdev=5288.01 00:34:31.863 clat (usec): min=8030, max=49587, avg=28018.89, stdev=1898.76 00:34:31.863 lat (usec): min=8045, max=49634, avg=28036.18, stdev=1899.25 00:34:31.863 clat percentiles 
(usec): 00:34:31.863 | 1.00th=[26870], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:34:31.863 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:34:31.863 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28967], 00:34:31.863 | 99.00th=[29492], 99.50th=[32900], 99.90th=[49546], 99.95th=[49546], 00:34:31.863 | 99.99th=[49546] 00:34:31.863 bw ( KiB/s): min= 2052, max= 2432, per=4.15%, avg=2265.80, stdev=83.55, samples=20 00:34:31.863 iops : min= 513, max= 608, avg=566.45, stdev=20.89, samples=20 00:34:31.863 lat (msec) : 10=0.28%, 20=0.35%, 50=99.37% 00:34:31.863 cpu : usr=98.57%, sys=1.06%, ctx=13, majf=0, minf=30 00:34:31.863 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.863 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.863 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.863 00:34:31.863 Run status group 0 (all jobs): 00:34:31.863 READ: bw=53.3MiB/s (55.8MB/s), 2263KiB/s-2342KiB/s (2318kB/s-2398kB/s), io=535MiB (561MB), run=10001-10048msec 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:31.863 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 bdev_null0 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
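For anyone replaying this teardown/setup by hand, the xtrace lines above reduce to a short JSON-RPC sequence. A minimal sketch using SPDK's scripts/rpc.py, assuming a running nvmf_tgt on the default RPC socket (the bdev name, sizes, DIF type and the 10.0.0.2:4420 listener are exactly what this trace reports):

  # Create a 64 MiB null bdev, 512 B blocks + 16 B metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # Expose it over NVMe/TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
  # Teardown (the destroy_subsystem loop above) is the mirror image
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0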
00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 [2024-11-19 09:36:31.861987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 bdev_null1 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:31.864 { 00:34:31.864 "params": { 00:34:31.864 "name": "Nvme$subsystem", 00:34:31.864 "trtype": "$TEST_TRANSPORT", 00:34:31.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.864 "adrfam": "ipv4", 00:34:31.864 "trsvcid": "$NVMF_PORT", 00:34:31.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.864 "hdgst": ${hdgst:-false}, 00:34:31.864 "ddgst": ${ddgst:-false} 00:34:31.864 }, 00:34:31.864 "method": "bdev_nvme_attach_controller" 00:34:31.864 } 00:34:31.864 EOF 00:34:31.864 )") 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:31.864 { 00:34:31.864 "params": { 00:34:31.864 "name": "Nvme$subsystem", 00:34:31.864 "trtype": "$TEST_TRANSPORT", 00:34:31.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.864 "adrfam": "ipv4", 00:34:31.864 "trsvcid": "$NVMF_PORT", 00:34:31.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.864 "hdgst": ${hdgst:-false}, 00:34:31.864 "ddgst": ${ddgst:-false} 00:34:31.864 }, 00:34:31.864 "method": "bdev_nvme_attach_controller" 00:34:31.864 } 00:34:31.864 EOF 00:34:31.864 )") 00:34:31.864 09:36:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:31.864 "params": { 00:34:31.864 "name": "Nvme0", 00:34:31.864 "trtype": "tcp", 00:34:31.864 "traddr": "10.0.0.2", 00:34:31.864 "adrfam": "ipv4", 00:34:31.864 "trsvcid": "4420", 00:34:31.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.864 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.864 "hdgst": false, 00:34:31.864 "ddgst": false 00:34:31.864 }, 00:34:31.864 "method": "bdev_nvme_attach_controller" 00:34:31.864 },{ 00:34:31.864 "params": { 00:34:31.864 "name": "Nvme1", 00:34:31.864 "trtype": "tcp", 00:34:31.864 "traddr": "10.0.0.2", 00:34:31.864 "adrfam": "ipv4", 00:34:31.864 "trsvcid": "4420", 00:34:31.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:31.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:31.864 "hdgst": false, 00:34:31.864 "ddgst": false 00:34:31.864 }, 00:34:31.864 "method": "bdev_nvme_attach_controller" 00:34:31.864 }' 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:31.864 09:36:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.864 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:31.864 ... 00:34:31.864 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:31.864 ... 
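The attach-controller JSON printed above is handed to fio through a file descriptor rather than a temp file. Stripped of the test plumbing, the invocation amounts to the following sketch (the plugin path is the one this workspace reports; bdev.json and job.fio stand in for the /dev/fd/62 and /dev/fd/61 pipes):

  # Run fio against SPDK bdevs via the external spdk_bdev ioengine plugin
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

fio never opens kernel block devices here; the spdk_bdev engine attaches the NVMe/TCP controllers described in the JSON and drives the resulting bdevs in-process.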
00:34:31.864 fio-3.35 00:34:31.864 Starting 4 threads 00:34:37.124 00:34:37.124 filename0: (groupid=0, jobs=1): err= 0: pid=1374952: Tue Nov 19 09:36:37 2024 00:34:37.124 read: IOPS=2715, BW=21.2MiB/s (22.2MB/s)(106MiB/5001msec) 00:34:37.124 slat (usec): min=6, max=211, avg=12.33, stdev= 9.18 00:34:37.124 clat (usec): min=720, max=5414, avg=2907.02, stdev=452.45 00:34:37.124 lat (usec): min=730, max=5426, avg=2919.35, stdev=453.11 00:34:37.124 clat percentiles (usec): 00:34:37.124 | 1.00th=[ 1647], 5.00th=[ 2180], 10.00th=[ 2343], 20.00th=[ 2540], 00:34:37.124 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2966], 60.00th=[ 3032], 00:34:37.124 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3359], 95.00th=[ 3654], 00:34:37.124 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 4752], 99.95th=[ 4948], 00:34:37.124 | 99.99th=[ 5407] 00:34:37.124 bw ( KiB/s): min=20624, max=22608, per=26.18%, avg=21728.00, stdev=675.51, samples=9 00:34:37.124 iops : min= 2578, max= 2826, avg=2716.00, stdev=84.44, samples=9 00:34:37.124 lat (usec) : 750=0.01%, 1000=0.09% 00:34:37.124 lat (msec) : 2=2.64%, 4=95.71%, 10=1.55% 00:34:37.124 cpu : usr=96.90%, sys=2.78%, ctx=7, majf=0, minf=9 00:34:37.124 IO depths : 1=0.7%, 2=6.8%, 4=64.5%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.124 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.124 issued rwts: total=13581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.124 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:37.124 filename0: (groupid=0, jobs=1): err= 0: pid=1374953: Tue Nov 19 09:36:37 2024 00:34:37.124 read: IOPS=2495, BW=19.5MiB/s (20.4MB/s)(97.5MiB/5001msec) 00:34:37.124 slat (nsec): min=6046, max=67685, avg=12577.63, stdev=9449.88 00:34:37.124 clat (usec): min=791, max=5799, avg=3168.09, stdev=446.96 00:34:37.124 lat (usec): min=801, max=5836, avg=3180.67, stdev=446.31 00:34:37.124 clat percentiles (usec): 00:34:37.124 | 1.00th=[ 2040], 5.00th=[ 2507], 10.00th=[ 2737], 20.00th=[ 2933], 00:34:37.124 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3163], 00:34:37.124 | 70.00th=[ 3294], 80.00th=[ 3392], 90.00th=[ 3720], 95.00th=[ 3982], 00:34:37.124 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5473], 00:34:37.124 | 99.99th=[ 5800] 00:34:37.124 bw ( KiB/s): min=19168, max=20368, per=23.99%, avg=19909.33, stdev=454.03, samples=9 00:34:37.124 iops : min= 2396, max= 2546, avg=2488.67, stdev=56.75, samples=9 00:34:37.124 lat (usec) : 1000=0.01% 00:34:37.124 lat (msec) : 2=0.81%, 4=94.38%, 10=4.80% 00:34:37.124 cpu : usr=96.82%, sys=2.86%, ctx=7, majf=0, minf=9 00:34:37.124 IO depths : 1=0.2%, 2=3.6%, 4=68.7%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.124 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.124 issued rwts: total=12479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.124 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:37.124 filename1: (groupid=0, jobs=1): err= 0: pid=1374954: Tue Nov 19 09:36:37 2024 00:34:37.124 read: IOPS=2657, BW=20.8MiB/s (21.8MB/s)(104MiB/5001msec) 00:34:37.124 slat (nsec): min=6150, max=70396, avg=12237.14, stdev=6265.06 00:34:37.124 clat (usec): min=772, max=5426, avg=2973.65, stdev=448.61 00:34:37.124 lat (usec): min=782, max=5447, avg=2985.89, stdev=448.71 00:34:37.124 clat percentiles (usec): 00:34:37.124 | 1.00th=[ 1860], 5.00th=[ 2278], 
10.00th=[ 2442], 20.00th=[ 2606], 00:34:37.124 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3032], 60.00th=[ 3064], 00:34:37.124 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3458], 95.00th=[ 3687], 00:34:37.124 | 99.00th=[ 4359], 99.50th=[ 4686], 99.90th=[ 5276], 99.95th=[ 5276], 00:34:37.124 | 99.99th=[ 5342] 00:34:37.124 bw ( KiB/s): min=20544, max=21792, per=25.74%, avg=21363.56, stdev=416.60, samples=9 00:34:37.124 iops : min= 2568, max= 2724, avg=2670.44, stdev=52.07, samples=9 00:34:37.124 lat (usec) : 1000=0.02% 00:34:37.124 lat (msec) : 2=1.63%, 4=96.27%, 10=2.08% 00:34:37.124 cpu : usr=97.02%, sys=2.64%, ctx=8, majf=0, minf=9 00:34:37.124 IO depths : 1=0.3%, 2=7.3%, 4=63.2%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.124 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.124 issued rwts: total=13292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.124 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:37.124 filename1: (groupid=0, jobs=1): err= 0: pid=1374955: Tue Nov 19 09:36:37 2024 00:34:37.124 read: IOPS=2504, BW=19.6MiB/s (20.5MB/s)(97.9MiB/5001msec) 00:34:37.124 slat (nsec): min=6042, max=67198, avg=12572.38, stdev=9337.84 00:34:37.124 clat (usec): min=678, max=5966, avg=3155.90, stdev=490.83 00:34:37.124 lat (usec): min=685, max=5991, avg=3168.47, stdev=490.41 00:34:37.124 clat percentiles (usec): 00:34:37.124 | 1.00th=[ 2040], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2868], 00:34:37.124 | 30.00th=[ 2999], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:34:37.124 | 70.00th=[ 3261], 80.00th=[ 3425], 90.00th=[ 3752], 95.00th=[ 4080], 00:34:37.124 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5407], 99.95th=[ 5669], 00:34:37.124 | 99.99th=[ 5866] 00:34:37.124 bw ( KiB/s): min=19648, max=20240, per=24.07%, avg=19975.89, stdev=211.24, samples=9 00:34:37.124 iops : min= 2456, max= 2530, avg=2496.89, stdev=26.44, samples=9 00:34:37.124 lat (usec) : 750=0.02%, 1000=0.08% 00:34:37.124 lat (msec) : 2=0.81%, 4=93.26%, 10=5.83% 00:34:37.125 cpu : usr=96.44%, sys=3.22%, ctx=8, majf=0, minf=9 00:34:37.125 IO depths : 1=0.1%, 2=4.3%, 4=67.2%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.125 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.125 issued rwts: total=12526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.125 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:37.125 00:34:37.125 Run status group 0 (all jobs): 00:34:37.125 READ: bw=81.0MiB/s (85.0MB/s), 19.5MiB/s-21.2MiB/s (20.4MB/s-22.2MB/s), io=405MiB (425MB), run=5001-5001msec 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.125 00:34:37.125 real 0m24.330s 00:34:37.125 user 4m52.374s 00:34:37.125 sys 0m5.091s 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:37.125 09:36:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.125 ************************************ 00:34:37.125 END TEST fio_dif_rand_params 00:34:37.125 ************************************ 00:34:37.384 09:36:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:37.384 09:36:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:37.384 09:36:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:37.384 09:36:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:37.384 ************************************ 00:34:37.384 START TEST fio_dif_digest 00:34:37.384 ************************************ 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:37.384 09:36:38 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.384 bdev_null0 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.384 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:37.384 [2024-11-19 09:36:38.257343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:37.385 { 00:34:37.385 "params": { 00:34:37.385 "name": "Nvme$subsystem", 00:34:37.385 "trtype": "$TEST_TRANSPORT", 
00:34:37.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.385 "adrfam": "ipv4", 00:34:37.385 "trsvcid": "$NVMF_PORT", 00:34:37.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.385 "hdgst": ${hdgst:-false}, 00:34:37.385 "ddgst": ${ddgst:-false} 00:34:37.385 }, 00:34:37.385 "method": "bdev_nvme_attach_controller" 00:34:37.385 } 00:34:37.385 EOF 00:34:37.385 )") 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:37.385 "params": { 00:34:37.385 "name": "Nvme0", 00:34:37.385 "trtype": "tcp", 00:34:37.385 "traddr": "10.0.0.2", 00:34:37.385 "adrfam": "ipv4", 00:34:37.385 "trsvcid": "4420", 00:34:37.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:37.385 "hdgst": true, 00:34:37.385 "ddgst": true 00:34:37.385 }, 00:34:37.385 "method": "bdev_nvme_attach_controller" 00:34:37.385 }' 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:37.385 09:36:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.643 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:37.643 ... 
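The job file itself arrives on /dev/fd/61 from gen_fio_conf. Reconstructed from the parameters traced above (bs=128k, numjobs=3, iodepth=3, runtime=10), it looks roughly like the heredoc below; the shape is inferred rather than copied, and filename=Nvme0n1 assumes the namespace bdev created by the attach above:

  # Sketch of the generated digest job file (shape inferred, not verbatim)
  cat <<-FIO > job.fio
  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=10
  time_based=1
  [filename0]
  filename=Nvme0n1
  FIO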
00:34:37.643 fio-3.35 00:34:37.643 Starting 3 threads 00:34:49.841 00:34:49.841 filename0: (groupid=0, jobs=1): err= 0: pid=1376226: Tue Nov 19 09:36:49 2024 00:34:49.841 read: IOPS=289, BW=36.1MiB/s (37.9MB/s)(363MiB/10044msec) 00:34:49.841 slat (nsec): min=6435, max=34319, avg=11994.19, stdev=1806.15 00:34:49.841 clat (usec): min=7951, max=51662, avg=10351.14, stdev=1272.55 00:34:49.841 lat (usec): min=7965, max=51670, avg=10363.13, stdev=1272.50 00:34:49.841 clat percentiles (usec): 00:34:49.841 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:34:49.841 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:34:49.842 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:34:49.842 | 99.00th=[12518], 99.50th=[12780], 99.90th=[14877], 99.95th=[44303], 00:34:49.842 | 99.99th=[51643] 00:34:49.842 bw ( KiB/s): min=35072, max=38656, per=35.11%, avg=37132.80, stdev=1058.70, samples=20 00:34:49.842 iops : min= 274, max= 302, avg=290.10, stdev= 8.27, samples=20 00:34:49.842 lat (msec) : 10=33.41%, 20=66.52%, 50=0.03%, 100=0.03% 00:34:49.842 cpu : usr=94.44%, sys=5.25%, ctx=26, majf=0, minf=50 00:34:49.842 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.842 issued rwts: total=2903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.842 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.842 filename0: (groupid=0, jobs=1): err= 0: pid=1376227: Tue Nov 19 09:36:49 2024 00:34:49.842 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(342MiB/10044msec) 00:34:49.842 slat (nsec): min=6427, max=35603, avg=11569.50, stdev=1835.65 00:34:49.842 clat (usec): min=7624, max=44011, avg=10976.92, stdev=984.26 00:34:49.842 lat (usec): min=7631, max=44023, avg=10988.49, stdev=984.32 00:34:49.842 clat percentiles (usec): 00:34:49.842 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:34:49.842 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:34:49.842 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:34:49.842 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13829], 99.95th=[15139], 00:34:49.842 | 99.99th=[43779] 00:34:49.842 bw ( KiB/s): min=34560, max=35584, per=33.08%, avg=34982.40, stdev=355.06, samples=20 00:34:49.842 iops : min= 270, max= 278, avg=273.30, stdev= 2.77, samples=20 00:34:49.842 lat (msec) : 10=9.80%, 20=90.16%, 50=0.04% 00:34:49.842 cpu : usr=94.47%, sys=5.23%, ctx=23, majf=0, minf=25 00:34:49.842 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.842 issued rwts: total=2734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.842 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.842 filename0: (groupid=0, jobs=1): err= 0: pid=1376228: Tue Nov 19 09:36:49 2024 00:34:49.842 read: IOPS=265, BW=33.1MiB/s (34.7MB/s)(333MiB/10044msec) 00:34:49.842 slat (nsec): min=6398, max=53334, avg=11594.91, stdev=1941.41 00:34:49.842 clat (usec): min=8361, max=49849, avg=11289.80, stdev=1270.41 00:34:49.842 lat (usec): min=8373, max=49856, avg=11301.40, stdev=1270.36 00:34:49.842 clat percentiles (usec): 00:34:49.842 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 
00:34:49.842 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:34:49.842 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:34:49.842 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13829], 99.95th=[46400], 00:34:49.842 | 99.99th=[50070] 00:34:49.842 bw ( KiB/s): min=33280, max=35584, per=32.19%, avg=34048.00, stdev=593.15, samples=20 00:34:49.842 iops : min= 260, max= 278, avg=266.00, stdev= 4.63, samples=20 00:34:49.842 lat (msec) : 10=5.11%, 20=94.82%, 50=0.08% 00:34:49.842 cpu : usr=94.41%, sys=5.29%, ctx=15, majf=0, minf=94 00:34:49.842 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.842 issued rwts: total=2662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.842 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.842 00:34:49.842 Run status group 0 (all jobs): 00:34:49.842 READ: bw=103MiB/s (108MB/s), 33.1MiB/s-36.1MiB/s (34.7MB/s-37.9MB/s), io=1037MiB (1088MB), run=10044-10044msec 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.842 00:34:49.842 real 0m11.318s 00:34:49.842 user 0m35.562s 00:34:49.842 sys 0m1.935s 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:49.842 09:36:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.842 ************************************ 00:34:49.842 END TEST fio_dif_digest 00:34:49.842 ************************************ 00:34:49.842 09:36:49 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:49.842 09:36:49 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.842 rmmod nvme_tcp 00:34:49.842 rmmod nvme_fabrics 00:34:49.842 rmmod nvme_keyring 00:34:49.842 09:36:49 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1367106 ']' 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1367106 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 1367106 ']' 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 1367106 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1367106 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1367106' 00:34:49.842 killing process with pid 1367106 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@971 -- # kill 1367106 00:34:49.842 09:36:49 nvmf_dif -- common/autotest_common.sh@976 -- # wait 1367106 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:49.842 09:36:49 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:51.748 Waiting for block devices as requested 00:34:51.748 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:51.748 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:51.748 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:52.007 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:52.007 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:52.007 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:52.266 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:52.266 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:52.267 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:52.525 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:52.525 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:52.526 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:52.526 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:52.784 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:52.784 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:52.784 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.043 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:53.043 09:36:53 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:53.043 09:36:53 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:53.043 09:36:53 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:53.043 09:36:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:53.043 09:36:53 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:53.043 09:36:53 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:53.043 09:36:53 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.043 09:36:53 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.043 09:36:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.043 09:36:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:53.043 09:36:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.578 09:36:56 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:55.578 
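The rmmod lines, setup.sh reset and interface flush above are ordinary host cleanup from nvmftestfini. Condensed, the sequence this trace walks through is (the pid and the cvl_0_1 device name are specific to this run):

  modprobe -v -r nvme-tcp                    # unloads nvme_tcp, nvme_fabrics, nvme_keyring
  kill "$nvmfpid" && wait "$nvmfpid"         # killprocess: stop the nvmf_tgt reactor
  scripts/setup.sh reset                     # rebind NVMe/ioatdma devices to kernel drivers
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK-added firewall rules
  ip -4 addr flush cvl_0_1                   # clear the initiator NIC's IPv4 addresses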
00:34:55.578 real 1m14.387s 00:34:55.578 user 7m10.380s 00:34:55.578 sys 0m21.048s 00:34:55.578 09:36:56 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:55.578 09:36:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:55.578 ************************************ 00:34:55.578 END TEST nvmf_dif 00:34:55.578 ************************************ 00:34:55.578 09:36:56 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:55.578 09:36:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:55.578 09:36:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:55.578 09:36:56 -- common/autotest_common.sh@10 -- # set +x 00:34:55.578 ************************************ 00:34:55.578 START TEST nvmf_abort_qd_sizes 00:34:55.578 ************************************ 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:55.578 * Looking for test storage... 00:34:55.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:55.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.578 --rc genhtml_branch_coverage=1 00:34:55.578 --rc genhtml_function_coverage=1 00:34:55.578 --rc genhtml_legend=1 00:34:55.578 --rc geninfo_all_blocks=1 00:34:55.578 --rc geninfo_unexecuted_blocks=1 00:34:55.578 00:34:55.578 ' 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:55.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.578 --rc genhtml_branch_coverage=1 00:34:55.578 --rc genhtml_function_coverage=1 00:34:55.578 --rc genhtml_legend=1 00:34:55.578 --rc geninfo_all_blocks=1 00:34:55.578 --rc geninfo_unexecuted_blocks=1 00:34:55.578 00:34:55.578 ' 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:55.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.578 --rc genhtml_branch_coverage=1 00:34:55.578 --rc genhtml_function_coverage=1 00:34:55.578 --rc genhtml_legend=1 00:34:55.578 --rc geninfo_all_blocks=1 00:34:55.578 --rc geninfo_unexecuted_blocks=1 00:34:55.578 00:34:55.578 ' 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:55.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.578 --rc genhtml_branch_coverage=1 00:34:55.578 --rc genhtml_function_coverage=1 00:34:55.578 --rc genhtml_legend=1 00:34:55.578 --rc geninfo_all_blocks=1 00:34:55.578 --rc geninfo_unexecuted_blocks=1 00:34:55.578 00:34:55.578 ' 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:55.578 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:55.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:55.579 09:36:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:02.146 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:02.146 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:02.146 Found net devices under 0000:86:00.0: cvl_0_0 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:02.146 Found net devices under 0000:86:00.1: cvl_0_1 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:02.146 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:02.147 09:37:01 
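The device discovery traced above buckets PCI functions by vendor:device ID (0x8086:0x159b is an Intel E810 port) and then asks sysfs which kernel net interfaces each matched function owns. A standalone sketch of that sysfs walk, assuming the same two-port NIC as this run:

    # Each PCI network function lists its interfaces under .../net/ in sysfs.
    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done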
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.147 09:37:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:02.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:02.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:35:02.147 00:35:02.147 --- 10.0.0.2 ping statistics --- 00:35:02.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.147 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:02.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:02.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:35:02.147 00:35:02.147 --- 10.0.0.1 ping statistics --- 00:35:02.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.147 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:02.147 09:37:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:04.053 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:04.053 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:04.053 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:04.053 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:04.053 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:04.053 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:04.054 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:04.992 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:04.992 09:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.992 09:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:04.992 09:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:04.992 09:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.992 09:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:04.992 09:37:05 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1384036 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1384036 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 1384036 ']' 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
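nvmf_tcp_init, traced above, turns the dual-port E810 into a self-contained test topology: port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420, and the two pings prove reachability in both directions. The same plumbing, condensed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace -> target namespace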
00:35:04.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:04.992 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:05.251 [2024-11-19 09:37:06.071462] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:35:05.251 [2024-11-19 09:37:06.071514] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.251 [2024-11-19 09:37:06.149564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:05.251 [2024-11-19 09:37:06.194209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.251 [2024-11-19 09:37:06.194248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.251 [2024-11-19 09:37:06.194255] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.251 [2024-11-19 09:37:06.194261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.251 [2024-11-19 09:37:06.194266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:05.251 [2024-11-19 09:37:06.195850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.251 [2024-11-19 09:37:06.195993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.251 [2024-11-19 09:37:06.196038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.251 [2024-11-19 09:37:06.196039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:05.251 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:05.251 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:35:05.251 09:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:05.251 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:05.251 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:05.509 
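nvmfappstart, whose output appears above, launches nvmf_tgt inside the target namespace with core mask 0xf (hence the four reactor cores in the EAL notices) and blocks until the app answers on its RPC socket. A reduced sketch; the socket poll is a crude stand-in for the waitforlisten helper the test actually uses:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten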
09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:05.509 09:37:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:05.509 ************************************ 00:35:05.509 START TEST spdk_target_abort 00:35:05.509 ************************************ 00:35:05.509 09:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:35:05.509 09:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:05.509 09:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:05.509 09:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.509 09:37:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.790 spdk_targetn1 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.790 [2024-11-19 09:37:09.210003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.790 [2024-11-19 09:37:09.256124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:08.790 09:37:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.075 Initializing NVMe Controllers 00:35:12.075 Attached to NVMe over Fabrics controller at 
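The subsystem this run attaches to was assembled by the rpc_cmd calls traced above; rpc_cmd is the test helper that forwards to scripts/rpc.py, so the roughly equivalent standalone invocations are:

    ./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420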
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:12.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:12.075 Initialization complete. Launching workers. 00:35:12.075 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17613, failed: 0 00:35:12.075 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1392, failed to submit 16221 00:35:12.075 success 757, unsuccessful 635, failed 0 00:35:12.075 09:37:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.075 09:37:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:15.358 Initializing NVMe Controllers 00:35:15.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:15.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:15.358 Initialization complete. Launching workers. 00:35:15.358 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8562, failed: 0 00:35:15.358 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1263, failed to submit 7299 00:35:15.358 success 315, unsuccessful 948, failed 0 00:35:15.358 09:37:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:15.358 09:37:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.641 Initializing NVMe Controllers 00:35:18.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:18.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:18.641 Initialization complete. Launching workers. 
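rabort, traced above, sweeps the abort example over queue depths 4, 24 and 64; each pass drives a 50/50 read/write mix (-w rw -M 50) of 4 KiB I/Os and races abort commands against them, and the NS/CTRLR summary lines count how many I/Os completed versus how many aborts were submitted and succeeded. The loop in miniature (paths relative to the SPDK tree):

    for qd in 4 24 64; do
        ./build/examples/abort -q $qd -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done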
00:35:18.641 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37784, failed: 0 00:35:18.641 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2773, failed to submit 35011 00:35:18.641 success 590, unsuccessful 2183, failed 0 00:35:18.641 09:37:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:18.641 09:37:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.641 09:37:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:18.641 09:37:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.641 09:37:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:18.641 09:37:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.641 09:37:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1384036 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 1384036 ']' 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 1384036 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1384036 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1384036' 00:35:19.575 killing process with pid 1384036 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 1384036 00:35:19.575 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 1384036 00:35:19.835 00:35:19.835 real 0m14.263s 00:35:19.835 user 0m54.297s 00:35:19.835 sys 0m2.686s 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:19.835 ************************************ 00:35:19.835 END TEST spdk_target_abort 00:35:19.835 ************************************ 00:35:19.835 09:37:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:19.835 09:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:19.835 09:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:19.835 09:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:19.835 ************************************ 00:35:19.835 START TEST kernel_target_abort 00:35:19.835 
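Before the kernel-target variant below begins, the teardown above deletes the subsystem, detaches the controller, and stops the target with killprocess. That helper is deliberately careful about pid 1384036: kill -0 proves the pid is still alive, and ps -o comm= proves it still names our reactor rather than a recycled pid (and is never sudo) before the kill/wait. A reduced sketch of the pattern:

    if kill -0 "$nvmfpid" 2>/dev/null; then
        name=$(ps --no-headers -o comm= "$nvmfpid")    # reactor_0 in this run
        [ "$name" != sudo ] && kill "$nvmfpid" && wait "$nvmfpid"
    fi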
************************************ 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:19.835 09:37:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:22.371 Waiting for block devices as requested 00:35:22.631 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:22.631 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:22.631 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:22.890 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:22.890 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:22.890 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:23.149 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:23.149 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:23.149 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:23.149 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:23.408 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:23.408 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:23.408 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:23.667 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:23.667 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:23.667 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:23.667 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:23.927 No valid GPT data, bailing 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:23.927 09:37:24 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:23.927 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:23.927 00:35:23.927 Discovery Log Number of Records 2, Generation counter 2 00:35:23.927 =====Discovery Log Entry 0====== 00:35:23.927 trtype: tcp 00:35:23.927 adrfam: ipv4 00:35:23.927 subtype: current discovery subsystem 00:35:23.927 treq: not specified, sq flow control disable supported 00:35:23.927 portid: 1 00:35:23.927 trsvcid: 4420 00:35:23.927 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:23.927 traddr: 10.0.0.1 00:35:23.927 eflags: none 00:35:23.927 sectype: none 00:35:23.927 =====Discovery Log Entry 1====== 00:35:23.927 trtype: tcp 00:35:23.927 adrfam: ipv4 00:35:23.927 subtype: nvme subsystem 00:35:23.927 treq: not specified, sq flow control disable supported 00:35:23.927 portid: 1 00:35:23.927 trsvcid: 4420 00:35:23.927 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:23.927 traddr: 10.0.0.1 00:35:23.927 eflags: none 00:35:23.927 sectype: none 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.186 09:37:24 
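configure_kernel_target, traced above, builds a Linux-kernel NVMe-oF target purely out of configfs: a subsystem directory backed by /dev/nvme0n1 as namespace 1, a TCP port on 10.0.0.1:4420, and a symlink that publishes the subsystem on the port; the nvme discover that follows then reports both the discovery subsystem and testnqn. A standalone sketch; xtrace hides the redirection targets, so the attribute names below are the kernel's standard nvmet configfs layout rather than literal quotes from this log:

    modprobe nvmet nvmet-tcp   # the trace itself shows only 'modprobe nvmet'
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir -p $subsys/namespaces/1
    echo 1 > $subsys/attr_allow_any_host            # assumed target of an 'echo 1' above
    echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
    echo 1 > $subsys/namespaces/1/enable
    port=/sys/kernel/config/nvmet/ports/1
    mkdir $port
    echo 10.0.0.1 > $port/addr_traddr
    echo tcp      > $port/addr_trtype
    echo 4420     > $port/addr_trsvcid
    echo ipv4     > $port/addr_adrfam
    ln -s $subsys $port/subsystems/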
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.186 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:24.187 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.187 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:24.187 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.187 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:24.187 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:24.187 09:37:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:27.474 Initializing NVMe Controllers 00:35:27.474 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:27.474 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:27.474 Initialization complete. Launching workers. 00:35:27.474 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92740, failed: 0 00:35:27.474 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92740, failed to submit 0 00:35:27.474 success 0, unsuccessful 92740, failed 0 00:35:27.474 09:37:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.474 09:37:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:30.881 Initializing NVMe Controllers 00:35:30.881 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:30.881 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:30.881 Initialization complete. Launching workers. 
00:35:30.881 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145218, failed: 0 00:35:30.881 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36366, failed to submit 108852 00:35:30.881 success 0, unsuccessful 36366, failed 0 00:35:30.881 09:37:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:30.881 09:37:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:33.415 Initializing NVMe Controllers 00:35:33.415 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:33.415 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:33.415 Initialization complete. Launching workers. 00:35:33.415 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139239, failed: 0 00:35:33.415 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34890, failed to submit 104349 00:35:33.415 success 0, unsuccessful 34890, failed 0 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:33.415 09:37:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:36.709 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:36.709 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:35:36.709 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:37.278 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:37.278 00:35:37.278 real 0m17.528s 00:35:37.278 user 0m9.288s 00:35:37.278 sys 0m4.928s 00:35:37.278 09:37:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:37.278 09:37:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.278 ************************************ 00:35:37.278 END TEST kernel_target_abort 00:35:37.278 ************************************ 00:35:37.278 09:37:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:37.278 09:37:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:37.278 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:37.278 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:37.278 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:37.278 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:37.278 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:37.278 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:37.278 rmmod nvme_tcp 00:35:37.278 rmmod nvme_fabrics 00:35:37.278 rmmod nvme_keyring 00:35:37.537 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:37.537 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:37.537 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:37.537 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1384036 ']' 00:35:37.537 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1384036 00:35:37.537 09:37:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 1384036 ']' 00:35:37.537 09:37:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 1384036 00:35:37.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1384036) - No such process 00:35:37.538 09:37:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 1384036 is not found' 00:35:37.538 Process with pid 1384036 is not found 00:35:37.538 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:37.538 09:37:38 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:40.071 Waiting for block devices as requested 00:35:40.071 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:40.330 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:40.330 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:40.330 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:40.588 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:40.588 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:40.588 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:40.848 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:40.848 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:40.848 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:40.848 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:41.106 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:41.106 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:41.106 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:41.365 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:41.365 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:41.365 
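The cleanup above is the mirror image of that setup: clean_kernel_target disables the namespace, unlinks the port from the subsystem, removes the configfs directories innermost-first (rmdir only accepts empty directories), and unloads the nvmet modules; nvmftestfini then unloads the initiator side, which is where the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines come from. Condensed from nvmf/common.sh@714-723 above, with the echo target again assumed:

    echo 0 > $subsys/namespaces/1/enable   # assumed target of the 'echo 0' above
    rm -f $port/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir $subsys/namespaces/1
    rmdir $port
    rmdir $subsys
    modprobe -r nvmet_tcp nvmet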
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:41.623 09:37:42 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:41.623 09:37:42 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:41.623 09:37:42 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:41.623 09:37:42 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:41.623 09:37:42 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:41.624 09:37:42 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:41.624 09:37:42 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:41.624 09:37:42 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:41.624 09:37:42 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.624 09:37:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:41.624 09:37:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:43.528 09:37:44 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:43.528 00:35:43.528 real 0m48.417s 00:35:43.528 user 1m7.905s 00:35:43.528 sys 0m16.383s 00:35:43.528 09:37:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:43.528 09:37:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:43.528 ************************************ 00:35:43.528 END TEST nvmf_abort_qd_sizes 00:35:43.528 ************************************ 00:35:43.528 09:37:44 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:43.528 09:37:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:43.528 09:37:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:43.528 09:37:44 -- common/autotest_common.sh@10 -- # set +x 00:35:43.787 ************************************ 00:35:43.787 START TEST keyring_file 00:35:43.787 ************************************ 00:35:43.787 09:37:44 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:43.787 * Looking for test storage... 
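run_test is the harness that brackets every test here: it prints the START/END banners, times the script (the real/user/sys triples are ordinary bash time output), and the '[' 2 -le 1 ']' traces are its argument-count check. A plausible reduction of the wrapper, under the assumption that the real helper in autotest_common.sh does more bookkeeping than this:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"              # emits the real/user/sys lines seen above
        echo "END TEST $name"
    }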
00:35:43.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:43.787 09:37:44 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:43.787 09:37:44 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:35:43.787 09:37:44 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:43.787 09:37:44 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:43.787 09:37:44 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:43.787 09:37:44 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:43.787 09:37:44 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:43.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.787 --rc genhtml_branch_coverage=1 00:35:43.787 --rc genhtml_function_coverage=1 00:35:43.787 --rc genhtml_legend=1 00:35:43.787 --rc geninfo_all_blocks=1 00:35:43.787 --rc geninfo_unexecuted_blocks=1 00:35:43.787 00:35:43.787 ' 00:35:43.787 09:37:44 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:43.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.787 --rc genhtml_branch_coverage=1 00:35:43.787 --rc genhtml_function_coverage=1 00:35:43.788 --rc genhtml_legend=1 00:35:43.788 --rc geninfo_all_blocks=1 
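The lcov probe traced above decides whether the installed lcov predates 2.0 by comparing versions field by field: both strings are split on '.', '-' and ':' and the fields are walked numerically, with missing fields treated as 0. A reduced sketch of that comparison under the same conventions:

    lt() {   # succeeds when version $1 < version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: use legacy --rc names"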
00:35:43.788 --rc geninfo_unexecuted_blocks=1 00:35:43.788 00:35:43.788 ' 00:35:43.788 09:37:44 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:43.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.788 --rc genhtml_branch_coverage=1 00:35:43.788 --rc genhtml_function_coverage=1 00:35:43.788 --rc genhtml_legend=1 00:35:43.788 --rc geninfo_all_blocks=1 00:35:43.788 --rc geninfo_unexecuted_blocks=1 00:35:43.788 00:35:43.788 ' 00:35:43.788 09:37:44 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:43.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:43.788 --rc genhtml_branch_coverage=1 00:35:43.788 --rc genhtml_function_coverage=1 00:35:43.788 --rc genhtml_legend=1 00:35:43.788 --rc geninfo_all_blocks=1 00:35:43.788 --rc geninfo_unexecuted_blocks=1 00:35:43.788 00:35:43.788 ' 00:35:43.788 09:37:44 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:43.788 09:37:44 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:43.788 09:37:44 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:43.788 09:37:44 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:43.788 09:37:44 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:43.788 09:37:44 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:43.788 09:37:44 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.788 09:37:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.788 09:37:44 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.788 09:37:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:43.788 09:37:44 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:43.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:43.788 09:37:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:43.788 09:37:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:43.788 09:37:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:43.788 09:37:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:43.788 09:37:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:43.788 09:37:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:43.788 09:37:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:43.788 09:37:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
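The "[: : integer expression expected" complaint in the trace above is harmless: nvmf/common.sh line 33 applies a numeric test to a variable that is empty under this configuration ('[' '' -eq 1 ']'), so the test errors out and the guarded branch is simply skipped. A minimal bash sketch of the failing pattern and the usual default-value guard (the variable name here is hypothetical, not from the script):

flag=''
[ "$flag" -eq 1 ] && echo "branch taken"       # -> [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo "branch taken"  # defaulting to 0 keeps the test quiet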
00:35:43.788 09:37:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:43.788 09:37:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:43.788 09:37:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:43.788 09:37:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:43.788 09:37:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QVIELgkpEu 00:35:43.788 09:37:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:43.788 09:37:44 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QVIELgkpEu 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QVIELgkpEu 00:35:44.047 09:37:44 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.QVIELgkpEu 00:35:44.047 09:37:44 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fQC6oJ7sBj 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:44.047 09:37:44 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:44.047 09:37:44 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:44.047 09:37:44 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:44.047 09:37:44 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:44.047 09:37:44 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:44.047 09:37:44 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fQC6oJ7sBj 00:35:44.047 09:37:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fQC6oJ7sBj 00:35:44.047 09:37:44 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.fQC6oJ7sBj 00:35:44.047 09:37:44 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:44.047 09:37:44 keyring_file -- keyring/file.sh@30 -- # tgtpid=1392814 00:35:44.047 09:37:44 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1392814 00:35:44.047 09:37:44 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1392814 ']' 00:35:44.047 09:37:44 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.047 09:37:44 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:44.047 09:37:44 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.047 09:37:44 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:44.047 09:37:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:44.047 [2024-11-19 09:37:44.938731] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:35:44.047 [2024-11-19 09:37:44.938778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392814 ] 00:35:44.047 [2024-11-19 09:37:45.010058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.047 [2024-11-19 09:37:45.052703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:35:44.306 09:37:45 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:44.306 [2024-11-19 09:37:45.266110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:44.306 null0 00:35:44.306 [2024-11-19 09:37:45.298162] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:44.306 [2024-11-19 09:37:45.298550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:44.306 09:37:45 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:44.306 [2024-11-19 09:37:45.326237] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:44.306 request: 00:35:44.306 { 00:35:44.306 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:44.306 "secure_channel": false, 00:35:44.306 "listen_address": { 00:35:44.306 "trtype": "tcp", 00:35:44.306 "traddr": "127.0.0.1", 00:35:44.306 "trsvcid": "4420" 00:35:44.306 }, 00:35:44.306 "method": "nvmf_subsystem_add_listener", 00:35:44.306 "req_id": 1 00:35:44.306 } 00:35:44.306 Got JSON-RPC error response 00:35:44.306 response: 00:35:44.306 { 00:35:44.306 
"code": -32602, 00:35:44.306 "message": "Invalid parameters" 00:35:44.306 } 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:44.306 09:37:45 keyring_file -- keyring/file.sh@47 -- # bperfpid=1392823 00:35:44.306 09:37:45 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1392823 /var/tmp/bperf.sock 00:35:44.306 09:37:45 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1392823 ']' 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:44.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:44.306 09:37:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:44.565 [2024-11-19 09:37:45.378726] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:35:44.565 [2024-11-19 09:37:45.378770] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1392823 ] 00:35:44.565 [2024-11-19 09:37:45.453762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:44.565 [2024-11-19 09:37:45.496474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:44.565 09:37:45 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:44.565 09:37:45 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:35:44.565 09:37:45 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QVIELgkpEu 00:35:44.565 09:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QVIELgkpEu 00:35:44.823 09:37:45 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fQC6oJ7sBj 00:35:44.823 09:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fQC6oJ7sBj 00:35:45.080 09:37:45 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:45.080 09:37:45 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:45.080 09:37:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.080 09:37:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:45.080 09:37:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:45.339 09:37:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.QVIELgkpEu == \/\t\m\p\/\t\m\p\.\Q\V\I\E\L\g\k\p\E\u ]] 00:35:45.339 09:37:46 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:45.339 09:37:46 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:45.339 09:37:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.339 09:37:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:45.339 09:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.597 09:37:46 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.fQC6oJ7sBj == \/\t\m\p\/\t\m\p\.\f\Q\C\6\o\J\7\s\B\j ]] 00:35:45.597 09:37:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.597 09:37:46 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:45.597 09:37:46 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:45.597 09:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.855 09:37:46 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:45.855 09:37:46 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.855 09:37:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:46.113 [2024-11-19 09:37:46.982942] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:46.113 nvme0n1 00:35:46.113 09:37:47 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:46.113 09:37:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:46.113 09:37:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:46.113 09:37:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.113 09:37:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.113 09:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.370 09:37:47 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:46.371 09:37:47 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:46.371 09:37:47 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:35:46.371 09:37:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:46.371 09:37:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.371 09:37:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:46.371 09:37:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.628 09:37:47 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:46.628 09:37:47 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:46.628 Running I/O for 1 seconds... 00:35:47.562 18826.00 IOPS, 73.54 MiB/s 00:35:47.562 Latency(us) 00:35:47.562 [2024-11-19T08:37:48.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.562 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:47.562 nvme0n1 : 1.00 18873.47 73.72 0.00 0.00 6770.06 4217.10 12651.30 00:35:47.562 [2024-11-19T08:37:48.621Z] =================================================================================================================== 00:35:47.562 [2024-11-19T08:37:48.621Z] Total : 18873.47 73.72 0.00 0.00 6770.06 4217.10 12651.30 00:35:47.562 { 00:35:47.562 "results": [ 00:35:47.562 { 00:35:47.562 "job": "nvme0n1", 00:35:47.562 "core_mask": "0x2", 00:35:47.562 "workload": "randrw", 00:35:47.562 "percentage": 50, 00:35:47.562 "status": "finished", 00:35:47.562 "queue_depth": 128, 00:35:47.562 "io_size": 4096, 00:35:47.562 "runtime": 1.004267, 00:35:47.562 "iops": 18873.466916666584, 00:35:47.562 "mibps": 73.72448014322885, 00:35:47.562 "io_failed": 0, 00:35:47.562 "io_timeout": 0, 00:35:47.562 "avg_latency_us": 6770.057677030431, 00:35:47.562 "min_latency_us": 4217.099130434783, 00:35:47.562 "max_latency_us": 12651.297391304348 00:35:47.562 } 00:35:47.562 ], 00:35:47.562 "core_count": 1 00:35:47.562 } 00:35:47.562 09:37:48 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:47.563 09:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:47.821 09:37:48 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:47.821 09:37:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.821 09:37:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.821 09:37:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.821 09:37:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.821 09:37:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.079 09:37:48 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:48.079 09:37:48 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:48.079 09:37:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:48.079 09:37:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:48.079 09:37:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:48.079 09:37:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:48.079 09:37:48 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.337 09:37:49 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:48.337 09:37:49 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:48.337 09:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:48.337 [2024-11-19 09:37:49.370501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:48.337 [2024-11-19 09:37:49.371332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121ed00 (107): Transport endpoint is not connected 00:35:48.337 [2024-11-19 09:37:49.372327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121ed00 (9): Bad file descriptor 00:35:48.337 [2024-11-19 09:37:49.373328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:48.337 [2024-11-19 09:37:49.373338] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:48.337 [2024-11-19 09:37:49.373345] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:48.337 [2024-11-19 09:37:49.373355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
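The attach attempt above runs under NOT, the suite's expected-failure wrapper: the es=0 / valid_exec_arg bookkeeping in the trace captures the exit status, and the step passes only when the wrapped RPC fails. A simplified sketch of that inversion (the real helper in autotest_common.sh also special-cases signal exits via the (( es > 128 )) check seen in the trace, omitted here):

NOT() {
  if "$@"; then
    return 1   # wrapped command unexpectedly succeeded: the negative test fails
  fi
  return 0     # wrapped command failed, which is the expected outcome
}
NOT false && echo "negative test passed"

Here NOT wraps bdev_nvme_attach_controller with key1, the wrong PSK for this listener, so the -5 Input/output error response that follows is exactly the result the test is asserting.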
00:35:48.337 request: 00:35:48.337 { 00:35:48.337 "name": "nvme0", 00:35:48.337 "trtype": "tcp", 00:35:48.337 "traddr": "127.0.0.1", 00:35:48.337 "adrfam": "ipv4", 00:35:48.337 "trsvcid": "4420", 00:35:48.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.337 "prchk_reftag": false, 00:35:48.337 "prchk_guard": false, 00:35:48.337 "hdgst": false, 00:35:48.337 "ddgst": false, 00:35:48.337 "psk": "key1", 00:35:48.337 "allow_unrecognized_csi": false, 00:35:48.337 "method": "bdev_nvme_attach_controller", 00:35:48.337 "req_id": 1 00:35:48.337 } 00:35:48.337 Got JSON-RPC error response 00:35:48.337 response: 00:35:48.337 { 00:35:48.337 "code": -5, 00:35:48.337 "message": "Input/output error" 00:35:48.337 } 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:48.337 09:37:49 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:48.595 09:37:49 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.595 09:37:49 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:48.595 09:37:49 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:48.595 09:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.853 09:37:49 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:48.853 09:37:49 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:48.853 09:37:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:49.111 09:37:50 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:49.111 09:37:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:49.370 09:37:50 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:49.370 09:37:50 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:49.370 09:37:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.370 09:37:50 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:49.370 09:37:50 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.QVIELgkpEu 00:35:49.370 09:37:50 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.QVIELgkpEu 00:35:49.370 09:37:50 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:49.370 09:37:50 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.QVIELgkpEu 00:35:49.370 09:37:50 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:49.370 09:37:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.370 09:37:50 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:49.370 09:37:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.370 09:37:50 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QVIELgkpEu 00:35:49.370 09:37:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QVIELgkpEu 00:35:49.630 [2024-11-19 09:37:50.597527] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QVIELgkpEu': 0100660 00:35:49.630 [2024-11-19 09:37:50.597556] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:49.630 request: 00:35:49.630 { 00:35:49.630 "name": "key0", 00:35:49.630 "path": "/tmp/tmp.QVIELgkpEu", 00:35:49.630 "method": "keyring_file_add_key", 00:35:49.630 "req_id": 1 00:35:49.630 } 00:35:49.630 Got JSON-RPC error response 00:35:49.630 response: 00:35:49.630 { 00:35:49.630 "code": -1, 00:35:49.630 "message": "Operation not permitted" 00:35:49.630 } 00:35:49.630 09:37:50 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:49.630 09:37:50 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:49.630 09:37:50 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:49.630 09:37:50 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:49.630 09:37:50 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.QVIELgkpEu 00:35:49.630 09:37:50 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QVIELgkpEu 00:35:49.630 09:37:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QVIELgkpEu 00:35:49.891 09:37:50 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.QVIELgkpEu 00:35:49.891 09:37:50 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:49.891 09:37:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.891 09:37:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.891 09:37:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.891 09:37:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:49.891 09:37:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.152 09:37:51 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:50.152 09:37:51 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.152 09:37:51 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:50.152 09:37:51 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.152 09:37:51 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:50.152 09:37:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.152 09:37:51 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:50.152 09:37:51 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.152 09:37:51 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.152 09:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.152 [2024-11-19 09:37:51.195113] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.QVIELgkpEu': No such file or directory 00:35:50.152 [2024-11-19 09:37:51.195135] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:50.152 [2024-11-19 09:37:51.195151] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:50.152 [2024-11-19 09:37:51.195158] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:50.152 [2024-11-19 09:37:51.195165] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:50.152 [2024-11-19 09:37:51.195171] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:50.152 request: 00:35:50.152 { 00:35:50.152 "name": "nvme0", 00:35:50.152 "trtype": "tcp", 00:35:50.152 "traddr": "127.0.0.1", 00:35:50.152 "adrfam": "ipv4", 00:35:50.152 "trsvcid": "4420", 00:35:50.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:50.152 "prchk_reftag": false, 00:35:50.152 "prchk_guard": false, 00:35:50.152 "hdgst": false, 00:35:50.152 "ddgst": false, 00:35:50.152 "psk": "key0", 00:35:50.152 "allow_unrecognized_csi": false, 00:35:50.152 "method": "bdev_nvme_attach_controller", 00:35:50.152 "req_id": 1 00:35:50.152 } 00:35:50.152 Got JSON-RPC error response 00:35:50.152 response: 00:35:50.152 { 00:35:50.152 "code": -19, 00:35:50.152 "message": "No such device" 00:35:50.152 } 00:35:50.409 09:37:51 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:50.409 09:37:51 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:50.409 09:37:51 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:50.410 09:37:51 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:50.410 09:37:51 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:50.410 09:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:50.410 09:37:51 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:50.410 09:37:51 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:35:50.410 09:37:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:50.410 09:37:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:50.410 09:37:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:50.410 09:37:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:50.410 09:37:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TVLJGfvDAY 00:35:50.410 09:37:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:50.410 09:37:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:50.410 09:37:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:50.410 09:37:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:50.410 09:37:51 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:50.410 09:37:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:50.410 09:37:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:50.667 09:37:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TVLJGfvDAY 00:35:50.667 09:37:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TVLJGfvDAY 00:35:50.667 09:37:51 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.TVLJGfvDAY 00:35:50.667 09:37:51 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TVLJGfvDAY 00:35:50.667 09:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TVLJGfvDAY 00:35:50.667 09:37:51 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.667 09:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.925 nvme0n1 00:35:50.925 09:37:51 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:50.925 09:37:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.925 09:37:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:50.925 09:37:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.925 09:37:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:50.925 09:37:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.182 09:37:52 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:51.182 09:37:52 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:51.182 09:37:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:51.441 09:37:52 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:51.441 09:37:52 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:51.441 09:37:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.441 09:37:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:51.441 09:37:52 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.699 09:37:52 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:51.699 09:37:52 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:51.699 09:37:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:51.699 09:37:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:51.699 09:37:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.699 09:37:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:51.699 09:37:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.957 09:37:52 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:51.957 09:37:52 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:51.957 09:37:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:51.957 09:37:52 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:51.957 09:37:52 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:51.957 09:37:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.215 09:37:53 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:52.215 09:37:53 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TVLJGfvDAY 00:35:52.215 09:37:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TVLJGfvDAY 00:35:52.473 09:37:53 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fQC6oJ7sBj 00:35:52.473 09:37:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fQC6oJ7sBj 00:35:52.731 09:37:53 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:52.731 09:37:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:52.989 nvme0n1 00:35:52.989 09:37:53 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:52.989 09:37:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:53.247 09:37:54 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:53.247 "subsystems": [ 00:35:53.247 { 00:35:53.247 "subsystem": "keyring", 00:35:53.247 "config": [ 00:35:53.247 { 00:35:53.247 "method": "keyring_file_add_key", 00:35:53.247 "params": { 00:35:53.247 "name": "key0", 00:35:53.247 "path": "/tmp/tmp.TVLJGfvDAY" 00:35:53.247 } 00:35:53.247 }, 00:35:53.247 { 00:35:53.247 "method": "keyring_file_add_key", 00:35:53.247 "params": { 00:35:53.247 "name": "key1", 00:35:53.247 "path": "/tmp/tmp.fQC6oJ7sBj" 00:35:53.247 } 00:35:53.247 } 00:35:53.247 ] 00:35:53.247 
}, 00:35:53.247 { 00:35:53.247 "subsystem": "iobuf", 00:35:53.247 "config": [ 00:35:53.247 { 00:35:53.247 "method": "iobuf_set_options", 00:35:53.247 "params": { 00:35:53.247 "small_pool_count": 8192, 00:35:53.247 "large_pool_count": 1024, 00:35:53.247 "small_bufsize": 8192, 00:35:53.247 "large_bufsize": 135168, 00:35:53.247 "enable_numa": false 00:35:53.247 } 00:35:53.247 } 00:35:53.247 ] 00:35:53.247 }, 00:35:53.247 { 00:35:53.247 "subsystem": "sock", 00:35:53.247 "config": [ 00:35:53.247 { 00:35:53.247 "method": "sock_set_default_impl", 00:35:53.247 "params": { 00:35:53.247 "impl_name": "posix" 00:35:53.247 } 00:35:53.247 }, 00:35:53.247 { 00:35:53.247 "method": "sock_impl_set_options", 00:35:53.247 "params": { 00:35:53.247 "impl_name": "ssl", 00:35:53.247 "recv_buf_size": 4096, 00:35:53.247 "send_buf_size": 4096, 00:35:53.247 "enable_recv_pipe": true, 00:35:53.247 "enable_quickack": false, 00:35:53.247 "enable_placement_id": 0, 00:35:53.247 "enable_zerocopy_send_server": true, 00:35:53.247 "enable_zerocopy_send_client": false, 00:35:53.247 "zerocopy_threshold": 0, 00:35:53.247 "tls_version": 0, 00:35:53.247 "enable_ktls": false 00:35:53.247 } 00:35:53.247 }, 00:35:53.247 { 00:35:53.247 "method": "sock_impl_set_options", 00:35:53.247 "params": { 00:35:53.247 "impl_name": "posix", 00:35:53.247 "recv_buf_size": 2097152, 00:35:53.247 "send_buf_size": 2097152, 00:35:53.247 "enable_recv_pipe": true, 00:35:53.247 "enable_quickack": false, 00:35:53.247 "enable_placement_id": 0, 00:35:53.247 "enable_zerocopy_send_server": true, 00:35:53.247 "enable_zerocopy_send_client": false, 00:35:53.247 "zerocopy_threshold": 0, 00:35:53.248 "tls_version": 0, 00:35:53.248 "enable_ktls": false 00:35:53.248 } 00:35:53.248 } 00:35:53.248 ] 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "subsystem": "vmd", 00:35:53.248 "config": [] 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "subsystem": "accel", 00:35:53.248 "config": [ 00:35:53.248 { 00:35:53.248 "method": "accel_set_options", 00:35:53.248 "params": { 00:35:53.248 "small_cache_size": 128, 00:35:53.248 "large_cache_size": 16, 00:35:53.248 "task_count": 2048, 00:35:53.248 "sequence_count": 2048, 00:35:53.248 "buf_count": 2048 00:35:53.248 } 00:35:53.248 } 00:35:53.248 ] 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "subsystem": "bdev", 00:35:53.248 "config": [ 00:35:53.248 { 00:35:53.248 "method": "bdev_set_options", 00:35:53.248 "params": { 00:35:53.248 "bdev_io_pool_size": 65535, 00:35:53.248 "bdev_io_cache_size": 256, 00:35:53.248 "bdev_auto_examine": true, 00:35:53.248 "iobuf_small_cache_size": 128, 00:35:53.248 "iobuf_large_cache_size": 16 00:35:53.248 } 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "method": "bdev_raid_set_options", 00:35:53.248 "params": { 00:35:53.248 "process_window_size_kb": 1024, 00:35:53.248 "process_max_bandwidth_mb_sec": 0 00:35:53.248 } 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "method": "bdev_iscsi_set_options", 00:35:53.248 "params": { 00:35:53.248 "timeout_sec": 30 00:35:53.248 } 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "method": "bdev_nvme_set_options", 00:35:53.248 "params": { 00:35:53.248 "action_on_timeout": "none", 00:35:53.248 "timeout_us": 0, 00:35:53.248 "timeout_admin_us": 0, 00:35:53.248 "keep_alive_timeout_ms": 10000, 00:35:53.248 "arbitration_burst": 0, 00:35:53.248 "low_priority_weight": 0, 00:35:53.248 "medium_priority_weight": 0, 00:35:53.248 "high_priority_weight": 0, 00:35:53.248 "nvme_adminq_poll_period_us": 10000, 00:35:53.248 "nvme_ioq_poll_period_us": 0, 00:35:53.248 "io_queue_requests": 512, 00:35:53.248 
"delay_cmd_submit": true, 00:35:53.248 "transport_retry_count": 4, 00:35:53.248 "bdev_retry_count": 3, 00:35:53.248 "transport_ack_timeout": 0, 00:35:53.248 "ctrlr_loss_timeout_sec": 0, 00:35:53.248 "reconnect_delay_sec": 0, 00:35:53.248 "fast_io_fail_timeout_sec": 0, 00:35:53.248 "disable_auto_failback": false, 00:35:53.248 "generate_uuids": false, 00:35:53.248 "transport_tos": 0, 00:35:53.248 "nvme_error_stat": false, 00:35:53.248 "rdma_srq_size": 0, 00:35:53.248 "io_path_stat": false, 00:35:53.248 "allow_accel_sequence": false, 00:35:53.248 "rdma_max_cq_size": 0, 00:35:53.248 "rdma_cm_event_timeout_ms": 0, 00:35:53.248 "dhchap_digests": [ 00:35:53.248 "sha256", 00:35:53.248 "sha384", 00:35:53.248 "sha512" 00:35:53.248 ], 00:35:53.248 "dhchap_dhgroups": [ 00:35:53.248 "null", 00:35:53.248 "ffdhe2048", 00:35:53.248 "ffdhe3072", 00:35:53.248 "ffdhe4096", 00:35:53.248 "ffdhe6144", 00:35:53.248 "ffdhe8192" 00:35:53.248 ] 00:35:53.248 } 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "method": "bdev_nvme_attach_controller", 00:35:53.248 "params": { 00:35:53.248 "name": "nvme0", 00:35:53.248 "trtype": "TCP", 00:35:53.248 "adrfam": "IPv4", 00:35:53.248 "traddr": "127.0.0.1", 00:35:53.248 "trsvcid": "4420", 00:35:53.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.248 "prchk_reftag": false, 00:35:53.248 "prchk_guard": false, 00:35:53.248 "ctrlr_loss_timeout_sec": 0, 00:35:53.248 "reconnect_delay_sec": 0, 00:35:53.248 "fast_io_fail_timeout_sec": 0, 00:35:53.248 "psk": "key0", 00:35:53.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.248 "hdgst": false, 00:35:53.248 "ddgst": false, 00:35:53.248 "multipath": "multipath" 00:35:53.248 } 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "method": "bdev_nvme_set_hotplug", 00:35:53.248 "params": { 00:35:53.248 "period_us": 100000, 00:35:53.248 "enable": false 00:35:53.248 } 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "method": "bdev_wait_for_examine" 00:35:53.248 } 00:35:53.248 ] 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "subsystem": "nbd", 00:35:53.248 "config": [] 00:35:53.248 } 00:35:53.248 ] 00:35:53.248 }' 00:35:53.248 09:37:54 keyring_file -- keyring/file.sh@115 -- # killprocess 1392823 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1392823 ']' 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1392823 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@957 -- # uname 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1392823 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1392823' 00:35:53.248 killing process with pid 1392823 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@971 -- # kill 1392823 00:35:53.248 Received shutdown signal, test time was about 1.000000 seconds 00:35:53.248 00:35:53.248 Latency(us) 00:35:53.248 [2024-11-19T08:37:54.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.248 [2024-11-19T08:37:54.307Z] =================================================================================================================== 00:35:53.248 [2024-11-19T08:37:54.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:53.248 09:37:54 
keyring_file -- common/autotest_common.sh@976 -- # wait 1392823 00:35:53.248 09:37:54 keyring_file -- keyring/file.sh@118 -- # bperfpid=1394359 00:35:53.248 09:37:54 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1394359 /var/tmp/bperf.sock 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1394359 ']' 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:53.248 09:37:54 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:53.248 09:37:54 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:53.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:53.248 09:37:54 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:53.248 "subsystems": [ 00:35:53.248 { 00:35:53.248 "subsystem": "keyring", 00:35:53.248 "config": [ 00:35:53.248 { 00:35:53.248 "method": "keyring_file_add_key", 00:35:53.248 "params": { 00:35:53.248 "name": "key0", 00:35:53.248 "path": "/tmp/tmp.TVLJGfvDAY" 00:35:53.248 } 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "method": "keyring_file_add_key", 00:35:53.248 "params": { 00:35:53.248 "name": "key1", 00:35:53.248 "path": "/tmp/tmp.fQC6oJ7sBj" 00:35:53.248 } 00:35:53.248 } 00:35:53.248 ] 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "subsystem": "iobuf", 00:35:53.248 "config": [ 00:35:53.248 { 00:35:53.248 "method": "iobuf_set_options", 00:35:53.248 "params": { 00:35:53.248 "small_pool_count": 8192, 00:35:53.248 "large_pool_count": 1024, 00:35:53.248 "small_bufsize": 8192, 00:35:53.248 "large_bufsize": 135168, 00:35:53.248 "enable_numa": false 00:35:53.248 } 00:35:53.248 } 00:35:53.248 ] 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "subsystem": "sock", 00:35:53.248 "config": [ 00:35:53.248 { 00:35:53.248 "method": "sock_set_default_impl", 00:35:53.248 "params": { 00:35:53.248 "impl_name": "posix" 00:35:53.248 } 00:35:53.248 }, 00:35:53.248 { 00:35:53.248 "method": "sock_impl_set_options", 00:35:53.248 "params": { 00:35:53.248 "impl_name": "ssl", 00:35:53.248 "recv_buf_size": 4096, 00:35:53.248 "send_buf_size": 4096, 00:35:53.248 "enable_recv_pipe": true, 00:35:53.248 "enable_quickack": false, 00:35:53.248 "enable_placement_id": 0, 00:35:53.249 "enable_zerocopy_send_server": true, 00:35:53.249 "enable_zerocopy_send_client": false, 00:35:53.249 "zerocopy_threshold": 0, 00:35:53.249 "tls_version": 0, 00:35:53.249 "enable_ktls": false 00:35:53.249 } 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "method": "sock_impl_set_options", 00:35:53.249 "params": { 00:35:53.249 "impl_name": "posix", 00:35:53.249 "recv_buf_size": 2097152, 00:35:53.249 "send_buf_size": 2097152, 00:35:53.249 "enable_recv_pipe": true, 00:35:53.249 "enable_quickack": false, 00:35:53.249 "enable_placement_id": 0, 00:35:53.249 "enable_zerocopy_send_server": true, 00:35:53.249 "enable_zerocopy_send_client": false, 00:35:53.249 "zerocopy_threshold": 0, 00:35:53.249 "tls_version": 0, 00:35:53.249 "enable_ktls": false 00:35:53.249 } 00:35:53.249 } 00:35:53.249 ] 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "subsystem": "vmd", 00:35:53.249 "config": [] 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "subsystem": "accel", 00:35:53.249 "config": [ 00:35:53.249 
{ 00:35:53.249 "method": "accel_set_options", 00:35:53.249 "params": { 00:35:53.249 "small_cache_size": 128, 00:35:53.249 "large_cache_size": 16, 00:35:53.249 "task_count": 2048, 00:35:53.249 "sequence_count": 2048, 00:35:53.249 "buf_count": 2048 00:35:53.249 } 00:35:53.249 } 00:35:53.249 ] 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "subsystem": "bdev", 00:35:53.249 "config": [ 00:35:53.249 { 00:35:53.249 "method": "bdev_set_options", 00:35:53.249 "params": { 00:35:53.249 "bdev_io_pool_size": 65535, 00:35:53.249 "bdev_io_cache_size": 256, 00:35:53.249 "bdev_auto_examine": true, 00:35:53.249 "iobuf_small_cache_size": 128, 00:35:53.249 "iobuf_large_cache_size": 16 00:35:53.249 } 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "method": "bdev_raid_set_options", 00:35:53.249 "params": { 00:35:53.249 "process_window_size_kb": 1024, 00:35:53.249 "process_max_bandwidth_mb_sec": 0 00:35:53.249 } 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "method": "bdev_iscsi_set_options", 00:35:53.249 "params": { 00:35:53.249 "timeout_sec": 30 00:35:53.249 } 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "method": "bdev_nvme_set_options", 00:35:53.249 "params": { 00:35:53.249 "action_on_timeout": "none", 00:35:53.249 "timeout_us": 0, 00:35:53.249 "timeout_admin_us": 0, 00:35:53.249 "keep_alive_timeout_ms": 10000, 00:35:53.249 "arbitration_burst": 0, 00:35:53.249 "low_priority_weight": 0, 00:35:53.249 "medium_priority_weight": 0, 00:35:53.249 "high_priority_weight": 0, 00:35:53.249 "nvme_adminq_poll_period_us": 10000, 00:35:53.249 "nvme_ioq_poll_period_us": 0, 00:35:53.249 "io_queue_requests": 512, 00:35:53.249 "delay_cmd_submit": true, 00:35:53.249 "transport_retry_count": 4, 00:35:53.249 "bdev_retry_count": 3, 00:35:53.249 "transport_ack_timeout": 0, 00:35:53.249 "ctrlr_loss_timeout_sec": 0, 00:35:53.249 "reconnect_delay_sec": 0, 00:35:53.249 "fast_io_fail_timeout_sec": 0, 00:35:53.249 "disable_auto_failback": false, 00:35:53.249 "generate_uuids": false, 00:35:53.249 "transport_tos": 0, 00:35:53.249 "nvme_error_stat": false, 00:35:53.249 "rdma_srq_size": 0, 00:35:53.249 "io_path_stat": false, 00:35:53.249 "allow_accel_sequence": false, 00:35:53.249 "rdma_max_cq_size": 0, 00:35:53.249 "rdma_cm_event_timeout_ms": 0, 00:35:53.249 "dhchap_digests": [ 00:35:53.249 "sha256", 00:35:53.249 "sha384", 00:35:53.249 "sha512" 00:35:53.249 ], 00:35:53.249 "dhchap_dhgroups": [ 00:35:53.249 "null", 00:35:53.249 "ffdhe2048", 00:35:53.249 "ffdhe3072", 00:35:53.249 "ffdhe4096", 00:35:53.249 "ffdhe6144", 00:35:53.249 "ffdhe8192" 00:35:53.249 ] 00:35:53.249 } 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "method": "bdev_nvme_attach_controller", 00:35:53.249 "params": { 00:35:53.249 "name": "nvme0", 00:35:53.249 "trtype": "TCP", 00:35:53.249 "adrfam": "IPv4", 00:35:53.249 "traddr": "127.0.0.1", 00:35:53.249 "trsvcid": "4420", 00:35:53.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:53.249 "prchk_reftag": false, 00:35:53.249 "prchk_guard": false, 00:35:53.249 "ctrlr_loss_timeout_sec": 0, 00:35:53.249 "reconnect_delay_sec": 0, 00:35:53.249 "fast_io_fail_timeout_sec": 0, 00:35:53.249 "psk": "key0", 00:35:53.249 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:53.249 "hdgst": false, 00:35:53.249 "ddgst": false, 00:35:53.249 "multipath": "multipath" 00:35:53.249 } 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "method": "bdev_nvme_set_hotplug", 00:35:53.249 "params": { 00:35:53.249 "period_us": 100000, 00:35:53.249 "enable": false 00:35:53.249 } 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "method": "bdev_wait_for_examine" 00:35:53.249 } 00:35:53.249 
] 00:35:53.249 }, 00:35:53.249 { 00:35:53.249 "subsystem": "nbd", 00:35:53.249 "config": [] 00:35:53.249 } 00:35:53.249 ] 00:35:53.249 }' 00:35:53.249 09:37:54 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:53.249 09:37:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:53.507 [2024-11-19 09:37:54.333708] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 00:35:53.507 [2024-11-19 09:37:54.333756] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394359 ] 00:35:53.507 [2024-11-19 09:37:54.409229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.507 [2024-11-19 09:37:54.451845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.765 [2024-11-19 09:37:54.612881] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:54.331 09:37:55 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:54.331 09:37:55 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:35:54.331 09:37:55 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:54.331 09:37:55 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:54.331 09:37:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.331 09:37:55 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:54.331 09:37:55 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:54.331 09:37:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:54.331 09:37:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:54.331 09:37:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:54.331 09:37:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:54.331 09:37:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.590 09:37:55 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:54.590 09:37:55 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:54.590 09:37:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:54.590 09:37:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:54.590 09:37:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:54.590 09:37:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:54.590 09:37:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.848 09:37:55 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:54.848 09:37:55 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:54.848 09:37:55 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:54.848 09:37:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:55.106 09:37:55 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:55.106 09:37:55 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:55.106 09:37:55 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.TVLJGfvDAY /tmp/tmp.fQC6oJ7sBj 00:35:55.106 09:37:55 keyring_file -- keyring/file.sh@20 -- # killprocess 1394359 00:35:55.106 09:37:55 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1394359 ']' 00:35:55.106 09:37:55 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1394359 00:35:55.106 09:37:55 keyring_file -- common/autotest_common.sh@957 -- # uname 00:35:55.106 09:37:55 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:55.106 09:37:55 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1394359 00:35:55.106 09:37:56 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:55.106 09:37:56 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:55.106 09:37:56 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1394359' 00:35:55.106 killing process with pid 1394359 00:35:55.106 09:37:56 keyring_file -- common/autotest_common.sh@971 -- # kill 1394359 00:35:55.106 Received shutdown signal, test time was about 1.000000 seconds 00:35:55.106 00:35:55.106 Latency(us) 00:35:55.106 [2024-11-19T08:37:56.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.106 [2024-11-19T08:37:56.165Z] =================================================================================================================== 00:35:55.106 [2024-11-19T08:37:56.165Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:55.106 09:37:56 keyring_file -- common/autotest_common.sh@976 -- # wait 1394359 00:35:55.365 09:37:56 keyring_file -- keyring/file.sh@21 -- # killprocess 1392814 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1392814 ']' 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1392814 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@957 -- # uname 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1392814 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1392814' 00:35:55.365 killing process with pid 1392814 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@971 -- # kill 1392814 00:35:55.365 09:37:56 keyring_file -- common/autotest_common.sh@976 -- # wait 1392814 00:35:55.623 00:35:55.624 real 0m11.951s 00:35:55.624 user 0m29.803s 00:35:55.624 sys 0m2.712s 00:35:55.624 09:37:56 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:55.624 09:37:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:55.624 ************************************ 00:35:55.624 END TEST keyring_file 00:35:55.624 ************************************ 00:35:55.624 09:37:56 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:35:55.624 09:37:56 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:55.624 09:37:56 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:55.624 09:37:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:55.624 09:37:56 -- 
common/autotest_common.sh@10 -- # set +x 00:35:55.624 ************************************ 00:35:55.624 START TEST keyring_linux 00:35:55.624 ************************************ 00:35:55.624 09:37:56 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:55.624 Joined session keyring: 691278988 00:35:55.883 * Looking for test storage... 00:35:55.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:55.883 09:37:56 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:55.883 09:37:56 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:35:55.883 09:37:56 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:55.883 09:37:56 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:55.883 09:37:56 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:55.883 09:37:56 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:55.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.883 --rc genhtml_branch_coverage=1 00:35:55.883 --rc genhtml_function_coverage=1 00:35:55.883 --rc genhtml_legend=1 00:35:55.883 --rc geninfo_all_blocks=1 00:35:55.883 --rc geninfo_unexecuted_blocks=1 00:35:55.883 00:35:55.883 ' 00:35:55.883 09:37:56 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:55.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.883 --rc genhtml_branch_coverage=1 00:35:55.883 --rc genhtml_function_coverage=1 00:35:55.883 --rc genhtml_legend=1 00:35:55.883 --rc geninfo_all_blocks=1 00:35:55.883 --rc geninfo_unexecuted_blocks=1 00:35:55.883 00:35:55.883 ' 00:35:55.883 09:37:56 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:55.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.883 --rc genhtml_branch_coverage=1 00:35:55.883 --rc genhtml_function_coverage=1 00:35:55.883 --rc genhtml_legend=1 00:35:55.883 --rc geninfo_all_blocks=1 00:35:55.883 --rc geninfo_unexecuted_blocks=1 00:35:55.883 00:35:55.883 ' 00:35:55.883 09:37:56 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:55.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.883 --rc genhtml_branch_coverage=1 00:35:55.883 --rc genhtml_function_coverage=1 00:35:55.883 --rc genhtml_legend=1 00:35:55.883 --rc geninfo_all_blocks=1 00:35:55.883 --rc geninfo_unexecuted_blocks=1 00:35:55.883 00:35:55.883 ' 00:35:55.883 09:37:56 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:55.883 09:37:56 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.883 09:37:56 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.883 09:37:56 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.883 09:37:56 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.883 09:37:56 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.884 09:37:56 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.884 09:37:56 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:55.884 09:37:56 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
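The NVME_HOSTNQN exported just above comes from nvme-cli's gen-hostnqn, which prints a UUID-based NQN. A minimal sketch of that shape; reading the DMI product UUID and falling back to a random one are assumptions about nvme-cli internals, not something this log confirms:

# Sketch: build a uuid-style host NQN like `nvme gen-hostnqn` prints.
# The DMI source path and the uuidgen fallback are assumptions.
uuid=$(cat /sys/class/dmi/id/product_uuid 2>/dev/null | tr '[:upper:]' '[:lower:]')
[ -n "$uuid" ] || uuid=$(uuidgen)
echo "nqn.2014-08.org.nvmexpress:uuid:${uuid}"

The log's own value, nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562, follows this shape.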
00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:55.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:55.884 /tmp/:spdk-test:key0 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:55.884 
09:37:56 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:55.884 09:37:56 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:55.884 09:37:56 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:55.884 /tmp/:spdk-test:key1 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1394889 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:55.884 09:37:56 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1394889 00:35:55.884 09:37:56 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1394889 ']' 00:35:55.884 09:37:56 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:55.884 09:37:56 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:55.884 09:37:56 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:55.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:55.884 09:37:56 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:55.884 09:37:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.141 [2024-11-19 09:37:56.940373] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
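The two /tmp/:spdk-test:key files written above hold the keys in the TLS PSK interchange format that format_interchange_psk emits through its embedded python one-liner. A standalone sketch of that encoding, assuming the interchange layout is the configured PSK bytes followed by their little-endian CRC32, base64-encoded between a "NVMeTLSkey-1:<digest>:" prefix and a trailing colon (the CRC details are taken from the NVMe/TCP PSK interchange description, not visible in this log):

# Sketch: reproduce the PSK interchange encoding for key0. The CRC32
# suffix and its little-endian byte order are assumptions.
key=00112233445566778899aabbccddeeff   # raw key material, used as ASCII bytes
b64=$(python3 - "$key" <<'PY'
import base64, sys, zlib
raw = sys.argv[1].encode()                    # key string as written
crc = zlib.crc32(raw).to_bytes(4, "little")   # 4-byte integrity check
print(base64.b64encode(raw + crc).decode())
PY
)
echo "NVMeTLSkey-1:00:${b64}:" > /tmp/:spdk-test:key0   # 00 = no PSK digest
chmod 0600 /tmp/:spdk-test:key0                          # key files must be private

This lines up with the NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value loaded into the keyring later in the log, at least through the base64-encoded key-material prefix.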
00:35:56.141 [2024-11-19 09:37:56.940425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394889 ] 00:35:56.141 [2024-11-19 09:37:56.999285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.141 [2024-11-19 09:37:57.042598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:56.399 09:37:57 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:56.399 09:37:57 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:35:56.399 09:37:57 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:56.399 09:37:57 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:56.399 09:37:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.399 [2024-11-19 09:37:57.263665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.399 null0 00:35:56.399 [2024-11-19 09:37:57.295721] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:56.399 [2024-11-19 09:37:57.296094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:56.399 09:37:57 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:56.399 09:37:57 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:56.399 22368683 00:35:56.399 09:37:57 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:56.399 992988060 00:35:56.399 09:37:57 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1394974 00:35:56.399 09:37:57 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1394974 /var/tmp/bperf.sock 00:35:56.399 09:37:57 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:56.399 09:37:57 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1394974 ']' 00:35:56.399 09:37:57 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:56.399 09:37:57 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:56.400 09:37:57 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:56.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:56.400 09:37:57 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:56.400 09:37:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:56.400 [2024-11-19 09:37:57.366876] Starting SPDK v25.01-pre git sha1 a7ec5bc8e / DPDK 24.03.0 initialization... 
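The keyctl add calls above stage both interchange-format keys as "user" keys in the session keyring (@s), and the serials they return (22368683 and 992988060) are what the later search and unlink steps resolve. A round-trip of those keyutils operations in isolation, with an illustrative key name and payload rather than the test's values:

# Sketch: the session-keyring lifecycle the test exercises via keyctl.
keyctl add user :spdk-test:demo "NVMeTLSkey-1:00:example:" @s   # prints the new key's serial
sn=$(keyctl search @s user :spdk-test:demo)                     # description -> serial lookup
keyctl print "$sn"                                              # show the stored payload
keyctl unlink "$sn"                                             # drop it; prints "N links removed"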
00:35:56.400 [2024-11-19 09:37:57.366916] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394974 ] 00:35:56.400 [2024-11-19 09:37:57.443247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.657 [2024-11-19 09:37:57.486480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.657 09:37:57 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:56.657 09:37:57 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:35:56.657 09:37:57 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:56.657 09:37:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:56.657 09:37:57 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:56.657 09:37:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:57.221 09:37:57 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:57.221 09:37:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:57.221 [2024-11-19 09:37:58.139292] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:57.221 nvme0n1 00:35:57.221 09:37:58 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:57.221 09:37:58 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:57.221 09:37:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:57.221 09:37:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:57.221 09:37:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:57.221 09:37:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:57.479 09:37:58 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:57.479 09:37:58 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:57.479 09:37:58 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:57.479 09:37:58 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:57.479 09:37:58 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:57.479 09:37:58 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:57.479 09:37:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:57.736 09:37:58 keyring_linux -- keyring/linux.sh@25 -- # sn=22368683 00:35:57.736 09:37:58 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:57.737 09:37:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:57.737 09:37:58 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 22368683 == \2\2\3\6\8\6\8\3 ]] 00:35:57.737 09:37:58 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 22368683 00:35:57.737 09:37:58 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:57.737 09:37:58 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:57.737 Running I/O for 1 seconds... 00:35:59.112 21009.00 IOPS, 82.07 MiB/s 00:35:59.112 Latency(us) 00:35:59.112 [2024-11-19T08:38:00.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.112 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:59.112 nvme0n1 : 1.01 21011.29 82.08 0.00 0.00 6071.70 5242.88 12252.38 00:35:59.112 [2024-11-19T08:38:00.171Z] =================================================================================================================== 00:35:59.112 [2024-11-19T08:38:00.171Z] Total : 21011.29 82.08 0.00 0.00 6071.70 5242.88 12252.38 00:35:59.112 { 00:35:59.112 "results": [ 00:35:59.112 { 00:35:59.112 "job": "nvme0n1", 00:35:59.112 "core_mask": "0x2", 00:35:59.112 "workload": "randread", 00:35:59.112 "status": "finished", 00:35:59.112 "queue_depth": 128, 00:35:59.112 "io_size": 4096, 00:35:59.112 "runtime": 1.005983, 00:35:59.112 "iops": 21011.2894551896, 00:35:59.112 "mibps": 82.07534943433437, 00:35:59.112 "io_failed": 0, 00:35:59.112 "io_timeout": 0, 00:35:59.112 "avg_latency_us": 6071.703452795531, 00:35:59.112 "min_latency_us": 5242.88, 00:35:59.112 "max_latency_us": 12252.382608695652 00:35:59.112 } 00:35:59.112 ], 00:35:59.112 "core_count": 1 00:35:59.112 } 00:35:59.112 09:37:59 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:59.112 09:37:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:59.112 09:37:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:59.112 09:37:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:59.112 09:37:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:59.112 09:37:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:59.112 09:37:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:59.112 09:37:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
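The run summarized above reports 21011.29 IOPS of 4 KiB random reads over the PSK-secured connection. The MiB/s column is just IOPS times I/O size; assuming the results JSON shown above were captured to results.json (an illustrative file name), the figure can be re-derived:

# Sketch: cross-check bdevperf's reported throughput from its results JSON.
jq -r '.results[0] | "\(.iops) \(.io_size) \(.mibps)"' results.json |
  awk '{ printf "computed %.2f MiB/s, reported %.2f MiB/s\n", $1 * $2 / 1048576, $3 }'
# For this run: 21011.29 IOPS * 4096 B / 2^20 = 82.08 MiB/s, matching the table.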
00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:59.371 09:38:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:59.371 [2024-11-19 09:38:00.384806] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:59.371 [2024-11-19 09:38:00.385190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb4f60 (107): Transport endpoint is not connected 00:35:59.371 [2024-11-19 09:38:00.386185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb4f60 (9): Bad file descriptor 00:35:59.371 [2024-11-19 09:38:00.387187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:59.371 [2024-11-19 09:38:00.387197] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:59.371 [2024-11-19 09:38:00.387205] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:59.371 [2024-11-19 09:38:00.387214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:59.371 request: 00:35:59.371 { 00:35:59.371 "name": "nvme0", 00:35:59.371 "trtype": "tcp", 00:35:59.371 "traddr": "127.0.0.1", 00:35:59.371 "adrfam": "ipv4", 00:35:59.371 "trsvcid": "4420", 00:35:59.371 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:59.371 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:59.371 "prchk_reftag": false, 00:35:59.371 "prchk_guard": false, 00:35:59.371 "hdgst": false, 00:35:59.371 "ddgst": false, 00:35:59.371 "psk": ":spdk-test:key1", 00:35:59.371 "allow_unrecognized_csi": false, 00:35:59.371 "method": "bdev_nvme_attach_controller", 00:35:59.371 "req_id": 1 00:35:59.371 } 00:35:59.371 Got JSON-RPC error response 00:35:59.371 response: 00:35:59.371 { 00:35:59.371 "code": -5, 00:35:59.371 "message": "Input/output error" 00:35:59.371 } 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:59.371 09:38:00 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@33 -- # sn=22368683 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 22368683 00:35:59.371 1 links removed 00:35:59.371 09:38:00 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:59.630 09:38:00 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:59.630 09:38:00 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:59.630 09:38:00 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:59.630 09:38:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:59.630 09:38:00 keyring_linux -- keyring/linux.sh@33 -- # sn=992988060 00:35:59.630 09:38:00 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 992988060 00:35:59.630 1 links removed 00:35:59.630 09:38:00 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1394974 00:35:59.630 09:38:00 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1394974 ']' 00:35:59.630 09:38:00 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1394974 00:35:59.630 09:38:00 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:35:59.630 09:38:00 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:59.630 09:38:00 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1394974 00:35:59.630 09:38:00 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:59.630 09:38:00 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:59.630 09:38:00 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1394974' 00:35:59.630 killing process with pid 1394974 00:35:59.630 09:38:00 keyring_linux -- common/autotest_common.sh@971 -- # kill 1394974 00:35:59.631 Received shutdown signal, test time was about 1.000000 seconds 00:35:59.631 00:35:59.631 
Latency(us) 00:35:59.631 [2024-11-19T08:38:00.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.631 [2024-11-19T08:38:00.690Z] =================================================================================================================== 00:35:59.631 [2024-11-19T08:38:00.690Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:59.631 09:38:00 keyring_linux -- common/autotest_common.sh@976 -- # wait 1394974 00:35:59.631 09:38:00 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1394889 00:35:59.631 09:38:00 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1394889 ']' 00:35:59.631 09:38:00 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1394889 00:35:59.631 09:38:00 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:35:59.631 09:38:00 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:59.631 09:38:00 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1394889 00:35:59.889 09:38:00 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:59.889 09:38:00 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:59.889 09:38:00 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1394889' 00:35:59.889 killing process with pid 1394889 00:35:59.889 09:38:00 keyring_linux -- common/autotest_common.sh@971 -- # kill 1394889 00:35:59.889 09:38:00 keyring_linux -- common/autotest_common.sh@976 -- # wait 1394889 00:36:00.148 00:36:00.148 real 0m4.391s 00:36:00.148 user 0m8.379s 00:36:00.148 sys 0m1.446s 00:36:00.148 09:38:00 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:00.148 09:38:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:00.148 ************************************ 00:36:00.148 END TEST keyring_linux 00:36:00.148 ************************************ 00:36:00.148 09:38:01 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:36:00.148 09:38:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:00.148 09:38:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:00.148 09:38:01 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:00.148 09:38:01 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:36:00.148 09:38:01 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:00.148 09:38:01 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:00.148 09:38:01 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:00.148 09:38:01 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:00.148 09:38:01 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:00.149 09:38:01 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:00.149 09:38:01 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:36:00.149 09:38:01 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:00.149 09:38:01 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:00.149 09:38:01 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:36:00.149 09:38:01 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:36:00.149 09:38:01 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:36:00.149 09:38:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:00.149 09:38:01 -- common/autotest_common.sh@10 -- # set +x 00:36:00.149 09:38:01 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:36:00.149 09:38:01 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:36:00.149 09:38:01 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:36:00.149 09:38:01 -- common/autotest_common.sh@10 -- # set +x 00:36:05.423 INFO: APP EXITING 
00:36:05.423 INFO: killing all VMs 00:36:05.423 INFO: killing vhost app 00:36:05.423 INFO: EXIT DONE 00:36:07.973 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:07.973 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:07.973 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:11.266 Cleaning 00:36:11.266 Removing: /var/run/dpdk/spdk0/config 00:36:11.266 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:11.266 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:11.266 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:11.266 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:11.266 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:11.266 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:11.266 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:11.266 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:11.266 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:11.266 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:11.266 Removing: /var/run/dpdk/spdk1/config 00:36:11.266 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:11.266 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:11.266 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:11.266 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:11.266 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:11.266 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:11.266 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:11.266 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:11.266 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:11.266 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:11.266 Removing: /var/run/dpdk/spdk2/config 00:36:11.266 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:11.266 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:11.266 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:11.266 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:11.266 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:11.266 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:11.266 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:11.266 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:11.266 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:11.266 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:11.266 Removing: /var/run/dpdk/spdk3/config 00:36:11.266 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:11.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:11.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:11.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:11.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:11.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:11.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:11.266 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:11.266 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:11.266 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:11.266 Removing: /var/run/dpdk/spdk4/config 00:36:11.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:11.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:11.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:11.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:11.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:11.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:11.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:11.266 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:11.266 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:11.266 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:11.266 Removing: /dev/shm/bdev_svc_trace.1 00:36:11.266 Removing: /dev/shm/nvmf_trace.0 00:36:11.266 Removing: /dev/shm/spdk_tgt_trace.pid917084 00:36:11.266 Removing: /var/run/dpdk/spdk0 00:36:11.266 Removing: /var/run/dpdk/spdk1 00:36:11.266 Removing: /var/run/dpdk/spdk2 00:36:11.266 Removing: /var/run/dpdk/spdk3 00:36:11.266 Removing: /var/run/dpdk/spdk4 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1008422 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1012494 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1057791 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1063208 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1069059 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1075611 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1075684 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1076417 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1077299 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1078216 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1078745 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1078905 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1079133 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1079150 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1079165 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1080066 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1080977 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1081894 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1082366 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1082417 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1082760 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1083857 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1084844 00:36:11.266 Removing: /var/run/dpdk/spdk_pid1092933 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1122281 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1126791 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1128405 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1130239 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1130464 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1130487 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1130718 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1131231 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1133519 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1134334 00:36:11.267 Removing: 
/var/run/dpdk/spdk_pid1134835 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1136938 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1137422 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1137936 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1142213 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1147740 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1147742 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1147744 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1151614 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1159962 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1163769 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1169684 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1170843 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1172156 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1173486 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1178572 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1183019 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1186967 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1194421 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1194442 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1199144 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1199371 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1199601 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1200004 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1200066 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1204546 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1205120 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1209457 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1212024 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1217385 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1222857 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1232055 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1239268 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1239271 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1257836 00:36:11.267 Removing: /var/run/dpdk/spdk_pid1258306 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1258846 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1259468 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1260182 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1260693 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1261164 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1261797 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1265882 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1266116 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1272188 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1272245 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1278222 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1282442 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1291959 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1292647 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1296731 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1297155 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1301185 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1307055 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1309657 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1319598 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1328836 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1330604 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1331478 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1347547 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1351455 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1354148 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1362161 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1362277 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1367357 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1369304 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1371654 00:36:11.527 Removing: 
/var/run/dpdk/spdk_pid1372865 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1374841 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1375903 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1384656 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1385116 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1385578 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1387922 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1388478 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1388988 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1392814 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1392823 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1394359 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1394889 00:36:11.527 Removing: /var/run/dpdk/spdk_pid1394974 00:36:11.527 Removing: /var/run/dpdk/spdk_pid775638 00:36:11.527 Removing: /var/run/dpdk/spdk_pid914544 00:36:11.527 Removing: /var/run/dpdk/spdk_pid916000 00:36:11.527 Removing: /var/run/dpdk/spdk_pid917084 00:36:11.527 Removing: /var/run/dpdk/spdk_pid917718 00:36:11.527 Removing: /var/run/dpdk/spdk_pid918658 00:36:11.527 Removing: /var/run/dpdk/spdk_pid918901 00:36:11.527 Removing: /var/run/dpdk/spdk_pid919878 00:36:11.527 Removing: /var/run/dpdk/spdk_pid919887 00:36:11.527 Removing: /var/run/dpdk/spdk_pid920239 00:36:11.527 Removing: /var/run/dpdk/spdk_pid921761 00:36:11.527 Removing: /var/run/dpdk/spdk_pid923092 00:36:11.527 Removing: /var/run/dpdk/spdk_pid923529 00:36:11.527 Removing: /var/run/dpdk/spdk_pid923725 00:36:11.527 Removing: /var/run/dpdk/spdk_pid923942 00:36:11.527 Removing: /var/run/dpdk/spdk_pid924221 00:36:11.527 Removing: /var/run/dpdk/spdk_pid924472 00:36:11.527 Removing: /var/run/dpdk/spdk_pid924718 00:36:11.527 Removing: /var/run/dpdk/spdk_pid925020 00:36:11.786 Removing: /var/run/dpdk/spdk_pid925752 00:36:11.786 Removing: /var/run/dpdk/spdk_pid928760 00:36:11.786 Removing: /var/run/dpdk/spdk_pid929018 00:36:11.786 Removing: /var/run/dpdk/spdk_pid929272 00:36:11.786 Removing: /var/run/dpdk/spdk_pid929278 00:36:11.786 Removing: /var/run/dpdk/spdk_pid929772 00:36:11.786 Removing: /var/run/dpdk/spdk_pid929783 00:36:11.786 Removing: /var/run/dpdk/spdk_pid930278 00:36:11.786 Removing: /var/run/dpdk/spdk_pid930281 00:36:11.786 Removing: /var/run/dpdk/spdk_pid930543 00:36:11.786 Removing: /var/run/dpdk/spdk_pid930625 00:36:11.786 Removing: /var/run/dpdk/spdk_pid930814 00:36:11.786 Removing: /var/run/dpdk/spdk_pid930955 00:36:11.786 Removing: /var/run/dpdk/spdk_pid931388 00:36:11.786 Removing: /var/run/dpdk/spdk_pid931637 00:36:11.786 Removing: /var/run/dpdk/spdk_pid931930 00:36:11.786 Removing: /var/run/dpdk/spdk_pid935740 00:36:11.786 Removing: /var/run/dpdk/spdk_pid940131 00:36:11.786 Removing: /var/run/dpdk/spdk_pid950384 00:36:11.786 Removing: /var/run/dpdk/spdk_pid951077 00:36:11.786 Removing: /var/run/dpdk/spdk_pid955359 00:36:11.786 Removing: /var/run/dpdk/spdk_pid955738 00:36:11.786 Removing: /var/run/dpdk/spdk_pid960402 00:36:11.786 Removing: /var/run/dpdk/spdk_pid966283 00:36:11.786 Removing: /var/run/dpdk/spdk_pid969029 00:36:11.786 Removing: /var/run/dpdk/spdk_pid979329 00:36:11.786 Removing: /var/run/dpdk/spdk_pid988258 00:36:11.786 Removing: /var/run/dpdk/spdk_pid990090 00:36:11.786 Removing: /var/run/dpdk/spdk_pid991018 00:36:11.786 Clean 00:36:11.786 09:38:12 -- common/autotest_common.sh@1451 -- # return 0 00:36:11.786 09:38:12 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:36:11.786 09:38:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:11.786 09:38:12 -- common/autotest_common.sh@10 -- # set +x 00:36:11.787 09:38:12 -- 
spdk/autotest.sh@387 -- # timing_exit autotest 00:36:11.787 09:38:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:11.787 09:38:12 -- common/autotest_common.sh@10 -- # set +x 00:36:12.045 09:38:12 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:12.045 09:38:12 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:12.045 09:38:12 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:12.045 09:38:12 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:36:12.045 09:38:12 -- spdk/autotest.sh@394 -- # hostname 00:36:12.045 09:38:12 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:12.045 geninfo: WARNING: invalid characters removed from testname! 00:36:33.976 09:38:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:36.588 09:38:37 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:38.489 09:38:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:40.393 09:38:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:42.297 09:38:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:44.201 09:38:44 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:46.104 09:38:46 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:46.104 09:38:46 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:46.104 09:38:46 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:36:46.104 09:38:46 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:46.104 09:38:46 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:46.104 09:38:46 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:46.104 + [[ -n 837465 ]] 00:36:46.104 + sudo kill 837465 00:36:46.114 [Pipeline] } 00:36:46.130 [Pipeline] // stage 00:36:46.135 [Pipeline] } 00:36:46.151 [Pipeline] // timeout 00:36:46.156 [Pipeline] } 00:36:46.172 [Pipeline] // catchError 00:36:46.178 [Pipeline] } 00:36:46.193 [Pipeline] // wrap 00:36:46.199 [Pipeline] } 00:36:46.214 [Pipeline] // catchError 00:36:46.224 [Pipeline] stage 00:36:46.227 [Pipeline] { (Epilogue) 00:36:46.243 [Pipeline] catchError 00:36:46.245 [Pipeline] { 00:36:46.260 [Pipeline] echo 00:36:46.262 Cleanup processes 00:36:46.269 [Pipeline] sh 00:36:46.561 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:46.561 1405588 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:46.575 [Pipeline] sh 00:36:46.860 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:46.860 ++ grep -v 'sudo pgrep' 00:36:46.860 ++ awk '{print $1}' 00:36:46.860 + sudo kill -9 00:36:46.860 + true 00:36:46.872 [Pipeline] sh 00:36:47.158 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:59.372 [Pipeline] sh 00:36:59.658 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:59.658 Artifacts sizes are good 00:36:59.674 [Pipeline] archiveArtifacts 00:36:59.681 Archiving artifacts 00:36:59.805 [Pipeline] sh 00:37:00.095 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:00.112 [Pipeline] cleanWs 00:37:00.123 [WS-CLEANUP] Deleting project workspace... 00:37:00.123 [WS-CLEANUP] Deferred wipeout is used... 00:37:00.130 [WS-CLEANUP] done 00:37:00.132 [Pipeline] } 00:37:00.149 [Pipeline] // catchError 00:37:00.161 [Pipeline] sh 00:37:00.470 + logger -p user.info -t JENKINS-CI 00:37:00.479 [Pipeline] } 00:37:00.493 [Pipeline] // stage 00:37:00.498 [Pipeline] } 00:37:00.510 [Pipeline] // node 00:37:00.515 [Pipeline] End of Pipeline 00:37:00.548 Finished: SUCCESS
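The coverage post-processing above amounts to merging the baseline and test lcov captures, then filtering out sources that should not count toward SPDK coverage. The same sequence condensed, with the repeated --rc coverage flags dropped for brevity:

# Sketch: the lcov merge/filter pipeline from the epilogue, shortened.
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
     -o "$out/cov_total.info"                                    # combine baseline + test capture
lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"       # strip vendored DPDK
lcov -q -r "$out/cov_total.info" '/usr/*' --ignore-errors unused,unused \
     -o "$out/cov_total.info"                                    # strip system sources
lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"  # strip helper apps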